00:00:00.000 Started by upstream project "autotest-per-patch" build number 126201 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/rocky9-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.021 The recommended git tool is: git 00:00:00.021 using credential 00000000-0000-0000-0000-000000000002 00:00:00.023 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/rocky9-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.039 Fetching changes from the remote Git repository 00:00:00.042 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.069 Using shallow fetch with depth 1 00:00:00.069 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.069 > git --version # timeout=10 00:00:00.098 > git --version # 'git version 2.39.2' 00:00:00.098 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.125 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.125 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.614 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.626 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.637 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:02.637 > git config core.sparsecheckout # timeout=10 00:00:02.650 > git read-tree -mu HEAD # timeout=10 00:00:02.666 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:02.688 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:02.688 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:02.805 [Pipeline] Start of Pipeline 00:00:02.821 [Pipeline] library 00:00:02.823 Loading library shm_lib@master 00:00:02.823 Library shm_lib@master is cached. Copying from home. 00:00:02.840 [Pipeline] node 00:00:02.847 Running on VM-host-SM17 in /var/jenkins/workspace/rocky9-vg-autotest_2 00:00:02.852 [Pipeline] { 00:00:02.865 [Pipeline] catchError 00:00:02.867 [Pipeline] { 00:00:02.880 [Pipeline] wrap 00:00:02.890 [Pipeline] { 00:00:02.898 [Pipeline] stage 00:00:02.900 [Pipeline] { (Prologue) 00:00:02.922 [Pipeline] echo 00:00:02.924 Node: VM-host-SM17 00:00:02.931 [Pipeline] cleanWs 00:00:02.940 [WS-CLEANUP] Deleting project workspace... 00:00:02.940 [WS-CLEANUP] Deferred wipeout is used... 
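The prologue above fetches the jenkins build-pool repo with a depth-1 fetch and then checks out the fetched revision detached. A standalone sketch of the same git sequence, assuming the same Gerrit URL and branch and ignoring the credential/proxy handling Jenkins injects:

git init jbp && cd jbp
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
# Depth-1 fetch of refs/heads/master only, then a detached checkout of what was fetched.
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f FETCH_HEAD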
00:00:02.947 [WS-CLEANUP] done 00:00:03.157 [Pipeline] setCustomBuildProperty 00:00:03.232 [Pipeline] httpRequest 00:00:03.255 [Pipeline] echo 00:00:03.257 Sorcerer 10.211.164.101 is alive 00:00:03.263 [Pipeline] httpRequest 00:00:03.267 HttpMethod: GET 00:00:03.267 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.268 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.268 Response Code: HTTP/1.1 200 OK 00:00:03.268 Success: Status code 200 is in the accepted range: 200,404 00:00:03.269 Saving response body to /var/jenkins/workspace/rocky9-vg-autotest_2/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.547 [Pipeline] sh 00:00:03.823 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:03.838 [Pipeline] httpRequest 00:00:03.854 [Pipeline] echo 00:00:03.855 Sorcerer 10.211.164.101 is alive 00:00:03.863 [Pipeline] httpRequest 00:00:03.867 HttpMethod: GET 00:00:03.868 URL: http://10.211.164.101/packages/spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:03.868 Sending request to url: http://10.211.164.101/packages/spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:03.869 Response Code: HTTP/1.1 200 OK 00:00:03.870 Success: Status code 200 is in the accepted range: 200,404 00:00:03.870 Saving response body to /var/jenkins/workspace/rocky9-vg-autotest_2/spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:20.229 [Pipeline] sh 00:00:20.510 + tar --no-same-owner -xf spdk_255871c197f0409b3ebd7e3c2323a8e265443306.tar.gz 00:00:23.826 [Pipeline] sh 00:00:24.105 + git -C spdk log --oneline -n5 00:00:24.105 255871c19 autopackage: Move core of the script to autobuild 00:00:24.105 bd4841ef7 autopackage: Replace SPDK_TEST_RELEASE_BUILD with SPDK_TEST_PACKAGING 00:00:24.105 719d03c6a sock/uring: only register net impl if supported 00:00:24.105 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev 00:00:24.105 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO 00:00:24.126 [Pipeline] writeFile 00:00:24.142 [Pipeline] sh 00:00:24.421 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:24.433 [Pipeline] sh 00:00:24.711 + cat autorun-spdk.conf 00:00:24.712 SPDK_TEST_UNITTEST=1 00:00:24.712 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.712 SPDK_TEST_BLOCKDEV=1 00:00:24.712 SPDK_TEST_DAOS=1 00:00:24.712 SPDK_RUN_ASAN=1 00:00:24.712 SPDK_TEST_USE_IGB_UIO=1 00:00:24.712 SPDK_TEST_RELEASE_BUILD=1 00:00:24.712 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:24.719 RUN_NIGHTLY=0 00:00:24.721 [Pipeline] } 00:00:24.739 [Pipeline] // stage 00:00:24.753 [Pipeline] stage 00:00:24.755 [Pipeline] { (Run VM) 00:00:24.763 [Pipeline] sh 00:00:25.039 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:25.039 + echo 'Start stage prepare_nvme.sh' 00:00:25.039 Start stage prepare_nvme.sh 00:00:25.039 + [[ -n 5 ]] 00:00:25.039 + disk_prefix=ex5 00:00:25.039 + [[ -n /var/jenkins/workspace/rocky9-vg-autotest_2 ]] 00:00:25.039 + [[ -e /var/jenkins/workspace/rocky9-vg-autotest_2/autorun-spdk.conf ]] 00:00:25.039 + source /var/jenkins/workspace/rocky9-vg-autotest_2/autorun-spdk.conf 00:00:25.039 ++ SPDK_TEST_UNITTEST=1 00:00:25.039 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:25.039 ++ SPDK_TEST_BLOCKDEV=1 00:00:25.039 ++ SPDK_TEST_DAOS=1 00:00:25.039 ++ SPDK_RUN_ASAN=1 00:00:25.039 ++ SPDK_TEST_USE_IGB_UIO=1 00:00:25.039 ++ SPDK_TEST_RELEASE_BUILD=1 00:00:25.039 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:25.039 ++ 
RUN_NIGHTLY=0 00:00:25.039 + cd /var/jenkins/workspace/rocky9-vg-autotest_2 00:00:25.039 + nvme_files=() 00:00:25.039 + declare -A nvme_files 00:00:25.039 + backend_dir=/var/lib/libvirt/images/backends 00:00:25.039 + nvme_files['nvme.img']=5G 00:00:25.039 + nvme_files['nvme-cmb.img']=5G 00:00:25.039 + nvme_files['nvme-multi0.img']=4G 00:00:25.039 + nvme_files['nvme-multi1.img']=4G 00:00:25.039 + nvme_files['nvme-multi2.img']=4G 00:00:25.039 + nvme_files['nvme-openstack.img']=8G 00:00:25.039 + nvme_files['nvme-zns.img']=5G 00:00:25.039 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:25.039 + (( SPDK_TEST_FTL == 1 )) 00:00:25.039 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:25.039 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:25.039 + for nvme in "${!nvme_files[@]}" 00:00:25.039 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:25.039 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:25.039 + for nvme in "${!nvme_files[@]}" 00:00:25.039 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:25.039 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:25.039 + for nvme in "${!nvme_files[@]}" 00:00:25.039 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:25.039 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:25.039 + for nvme in "${!nvme_files[@]}" 00:00:25.039 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:25.039 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:25.039 + for nvme in "${!nvme_files[@]}" 00:00:25.039 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:25.039 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:25.039 + for nvme in "${!nvme_files[@]}" 00:00:25.039 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:25.039 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:25.039 + for nvme in "${!nvme_files[@]}" 00:00:25.039 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:25.039 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:25.039 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:25.039 + echo 'End stage prepare_nvme.sh' 00:00:25.039 End stage prepare_nvme.sh 00:00:25.052 [Pipeline] sh 00:00:25.327 + DISTRO=rocky9 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:25.327 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f rocky9 00:00:25.327 00:00:25.327 DIR=/var/jenkins/workspace/rocky9-vg-autotest_2/spdk/scripts/vagrant 00:00:25.327 SPDK_DIR=/var/jenkins/workspace/rocky9-vg-autotest_2/spdk 00:00:25.327 VAGRANT_TARGET=/var/jenkins/workspace/rocky9-vg-autotest_2 00:00:25.327 HELP=0 
00:00:25.327 DRY_RUN=0 00:00:25.327 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img, 00:00:25.327 NVME_DISKS_TYPE=nvme, 00:00:25.327 NVME_AUTO_CREATE=0 00:00:25.327 NVME_DISKS_NAMESPACES=, 00:00:25.327 NVME_CMB=, 00:00:25.327 NVME_PMR=, 00:00:25.327 NVME_ZNS=, 00:00:25.327 NVME_MS=, 00:00:25.327 NVME_FDP=, 00:00:25.327 SPDK_VAGRANT_DISTRO=rocky9 00:00:25.327 SPDK_VAGRANT_VMCPU=10 00:00:25.327 SPDK_VAGRANT_VMRAM=12288 00:00:25.327 SPDK_VAGRANT_PROVIDER=libvirt 00:00:25.327 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:25.327 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:25.327 SPDK_OPENSTACK_NETWORK=0 00:00:25.327 VAGRANT_PACKAGE_BOX=0 00:00:25.327 VAGRANTFILE=/var/jenkins/workspace/rocky9-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:25.327 FORCE_DISTRO=true 00:00:25.327 VAGRANT_BOX_VERSION= 00:00:25.327 EXTRA_VAGRANTFILES= 00:00:25.327 NIC_MODEL=e1000 00:00:25.327 00:00:25.327 mkdir: created directory '/var/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt' 00:00:25.327 /var/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt /var/jenkins/workspace/rocky9-vg-autotest_2 00:00:28.613 Bringing machine 'default' up with 'libvirt' provider... 00:00:28.871 ==> default: Creating image (snapshot of base box volume). 00:00:29.129 ==> default: Creating domain with the following settings... 00:00:29.129 ==> default: -- Name: rocky9-9.0-1711172311-2200_default_1721051594_941b533f8a5026cee29b 00:00:29.129 ==> default: -- Domain type: kvm 00:00:29.129 ==> default: -- Cpus: 10 00:00:29.129 ==> default: -- Feature: acpi 00:00:29.129 ==> default: -- Feature: apic 00:00:29.129 ==> default: -- Feature: pae 00:00:29.129 ==> default: -- Memory: 12288M 00:00:29.129 ==> default: -- Memory Backing: hugepages: 00:00:29.129 ==> default: -- Management MAC: 00:00:29.129 ==> default: -- Loader: 00:00:29.129 ==> default: -- Nvram: 00:00:29.129 ==> default: -- Base box: spdk/rocky9 00:00:29.129 ==> default: -- Storage pool: default 00:00:29.129 ==> default: -- Image: /var/lib/libvirt/images/rocky9-9.0-1711172311-2200_default_1721051594_941b533f8a5026cee29b.img (20G) 00:00:29.129 ==> default: -- Volume Cache: default 00:00:29.129 ==> default: -- Kernel: 00:00:29.129 ==> default: -- Initrd: 00:00:29.129 ==> default: -- Graphics Type: vnc 00:00:29.129 ==> default: -- Graphics Port: -1 00:00:29.129 ==> default: -- Graphics IP: 127.0.0.1 00:00:29.129 ==> default: -- Graphics Password: Not defined 00:00:29.129 ==> default: -- Video Type: cirrus 00:00:29.129 ==> default: -- Video VRAM: 9216 00:00:29.129 ==> default: -- Sound Type: 00:00:29.129 ==> default: -- Keymap: en-us 00:00:29.129 ==> default: -- TPM Path: 00:00:29.129 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:29.129 ==> default: -- Command line args: 00:00:29.129 ==> default: -> value=-device, 00:00:29.130 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:29.130 ==> default: -> value=-drive, 00:00:29.130 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:29.130 ==> default: -> value=-device, 00:00:29.130 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.130 ==> default: Creating shared folders metadata... 00:00:29.130 ==> default: Starting domain. 00:00:30.505 ==> default: Waiting for domain to get an IP address... 00:00:45.412 ==> default: Waiting for SSH to become available... 
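The prepare_nvme.sh trace earlier in this stage drives backing-image creation from a bash associative array: each key is an image file name, each value its size, and the same create_nvme_img.sh helper is run for every entry. A minimal sketch of that pattern, assuming an SPDK checkout at ./spdk, a writable /var/lib/libvirt/images/backends, and the ex5 disk prefix used by this job:

#!/bin/bash
# Map image file name -> size, mirroring the nvme_files table in prepare_nvme.sh.
declare -A nvme_files=(
  ["nvme.img"]=5G
  ["nvme-cmb.img"]=5G
  ["nvme-multi0.img"]=4G
  ["nvme-multi1.img"]=4G
  ["nvme-multi2.img"]=4G
  ["nvme-openstack.img"]=8G
  ["nvme-zns.img"]=5G
)
backend_dir=/var/lib/libvirt/images/backends
disk_prefix=ex5
for nvme in "${!nvme_files[@]}"; do
  # create_nvme_img.sh formats a raw backing file of the requested size.
  sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
    -n "${backend_dir}/${disk_prefix}-${nvme}" -s "${nvme_files[$nvme]}"
done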
00:00:45.412 ==> default: Configuring and enabling network interfaces... 00:00:55.390 default: SSH address: 192.168.121.108:22 00:00:55.390 default: SSH username: vagrant 00:00:55.390 default: SSH auth method: private key 00:01:00.663 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/rocky9-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:10.743 ==> default: Mounting SSHFS shared folder... 00:01:12.647 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt/output => /home/vagrant/spdk_repo/output 00:01:12.647 ==> default: Checking Mount.. 00:01:14.548 ==> default: Folder Successfully Mounted! 00:01:14.548 ==> default: Running provisioner: file... 00:01:15.923 default: ~/.gitconfig => .gitconfig 00:01:16.489 00:01:16.489 SUCCESS! 00:01:16.489 00:01:16.489 cd to /var/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt and type "vagrant ssh" to use. 00:01:16.489 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:16.489 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt" to destroy all trace of vm. 00:01:16.489 00:01:16.499 [Pipeline] } 00:01:16.514 [Pipeline] // stage 00:01:16.525 [Pipeline] dir 00:01:16.526 Running in /var/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt 00:01:16.528 [Pipeline] { 00:01:16.542 [Pipeline] catchError 00:01:16.544 [Pipeline] { 00:01:16.557 [Pipeline] sh 00:01:16.836 + vagrant ssh-config --host vagrant 00:01:16.836 + sed -ne /^Host/,$p 00:01:16.836 + tee ssh_conf 00:01:20.146 Host vagrant 00:01:20.146 HostName 192.168.121.108 00:01:20.146 User vagrant 00:01:20.146 Port 22 00:01:20.146 UserKnownHostsFile /dev/null 00:01:20.146 StrictHostKeyChecking no 00:01:20.146 PasswordAuthentication no 00:01:20.146 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-rocky9/9.0-1711172311-2200/libvirt/rocky9 00:01:20.146 IdentitiesOnly yes 00:01:20.146 LogLevel FATAL 00:01:20.146 ForwardAgent yes 00:01:20.146 ForwardX11 yes 00:01:20.146 00:01:20.160 [Pipeline] withEnv 00:01:20.163 [Pipeline] { 00:01:20.180 [Pipeline] sh 00:01:20.458 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:20.458 source /etc/os-release 00:01:20.458 [[ -e /image.version ]] && img=$(< /image.version) 00:01:20.458 # Minimal, systemd-like check. 00:01:20.458 if [[ -e /.dockerenv ]]; then 00:01:20.458 # Clear garbage from the node's name: 00:01:20.458 # agt-er_autotest_547-896 -> autotest_547-896 00:01:20.458 # $HOSTNAME is the actual container id 00:01:20.458 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:20.458 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:20.458 # We can assume this is a mount from a host where container is running, 00:01:20.458 # so fetch its hostname to easily identify the target swarm worker. 
00:01:20.458 container="$(< /etc/hostname) ($agent)" 00:01:20.458 else 00:01:20.458 # Fallback 00:01:20.458 container=$agent 00:01:20.458 fi 00:01:20.458 fi 00:01:20.458 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:20.458 00:01:20.728 [Pipeline] } 00:01:20.749 [Pipeline] // withEnv 00:01:20.759 [Pipeline] setCustomBuildProperty 00:01:20.778 [Pipeline] stage 00:01:20.780 [Pipeline] { (Tests) 00:01:20.801 [Pipeline] sh 00:01:21.081 + scp -F ssh_conf -r /var/jenkins/workspace/rocky9-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:21.353 [Pipeline] sh 00:01:21.631 + scp -F ssh_conf -r /var/jenkins/workspace/rocky9-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:21.904 [Pipeline] timeout 00:01:21.904 Timeout set to expire in 1 hr 30 min 00:01:21.906 [Pipeline] { 00:01:21.924 [Pipeline] sh 00:01:22.201 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:22.767 HEAD is now at 255871c19 autopackage: Move core of the script to autobuild 00:01:22.782 [Pipeline] sh 00:01:23.060 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:23.319 sudo: /etc/sudoers.d/99-spdk-rlimits:1:23: unknown defaults entry "rlimit_core" 00:01:23.335 [Pipeline] sh 00:01:23.690 + scp -F ssh_conf -r /var/jenkins/workspace/rocky9-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:23.710 [Pipeline] sh 00:01:24.017 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=rocky9-vg-autotest ./autoruner.sh spdk_repo 00:01:24.275 ++ readlink -f spdk_repo 00:01:24.275 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:24.275 + [[ -n /home/vagrant/spdk_repo ]] 00:01:24.275 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:24.275 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:24.275 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:24.275 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:24.275 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:24.275 + [[ rocky9-vg-autotest == pkgdep-* ]] 00:01:24.275 + cd /home/vagrant/spdk_repo 00:01:24.275 + source /etc/os-release 00:01:24.275 ++ NAME='Rocky Linux' 00:01:24.275 ++ VERSION='9.3 (Blue Onyx)' 00:01:24.275 ++ ID=rocky 00:01:24.275 ++ ID_LIKE='rhel centos fedora' 00:01:24.275 ++ VERSION_ID=9.3 00:01:24.275 ++ PLATFORM_ID=platform:el9 00:01:24.275 ++ PRETTY_NAME='Rocky Linux 9.3 (Blue Onyx)' 00:01:24.275 ++ ANSI_COLOR='0;32' 00:01:24.275 ++ LOGO=fedora-logo-icon 00:01:24.275 ++ CPE_NAME=cpe:/o:rocky:rocky:9::baseos 00:01:24.275 ++ HOME_URL=https://rockylinux.org/ 00:01:24.275 ++ BUG_REPORT_URL=https://bugs.rockylinux.org/ 00:01:24.275 ++ SUPPORT_END=2032-05-31 00:01:24.275 ++ ROCKY_SUPPORT_PRODUCT=Rocky-Linux-9 00:01:24.275 ++ ROCKY_SUPPORT_PRODUCT_VERSION=9.3 00:01:24.275 ++ REDHAT_SUPPORT_PRODUCT='Rocky Linux' 00:01:24.275 ++ REDHAT_SUPPORT_PRODUCT_VERSION=9.3 00:01:24.275 + uname -a 00:01:24.275 Linux rocky9-cloud-1711172311-2200 5.14.0-362.24.1.el9_3.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Mar 13 17:33:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:24.275 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:24.275 sudo: /etc/sudoers.d/99-spdk-rlimits:1:23: unknown defaults entry "rlimit_core" 00:01:24.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:01:24.533 Hugepages 00:01:24.533 node hugesize free / total 00:01:24.533 node0 1048576kB 0 / 0 00:01:24.533 node0 2048kB 0 / 0 00:01:24.533 00:01:24.533 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:24.533 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:24.533 NVMe 0000:00:10.0 1b36 0010 0 nvme nvme0 nvme0n1 00:01:24.533 + rm -f /tmp/spdk-ld-path 00:01:24.533 + source autorun-spdk.conf 00:01:24.533 ++ SPDK_TEST_UNITTEST=1 00:01:24.533 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.533 ++ SPDK_TEST_BLOCKDEV=1 00:01:24.533 ++ SPDK_TEST_DAOS=1 00:01:24.533 ++ SPDK_RUN_ASAN=1 00:01:24.533 ++ SPDK_TEST_USE_IGB_UIO=1 00:01:24.533 ++ SPDK_TEST_RELEASE_BUILD=1 00:01:24.533 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.533 ++ RUN_NIGHTLY=0 00:01:24.533 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:24.533 + [[ -n '' ]] 00:01:24.533 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:24.533 sudo: /etc/sudoers.d/99-spdk-rlimits:1:23: unknown defaults entry "rlimit_core" 00:01:24.533 + for M in /var/spdk/build-*-manifest.txt 00:01:24.533 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:24.533 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.533 + for M in /var/spdk/build-*-manifest.txt 00:01:24.533 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:24.533 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:24.533 ++ uname 00:01:24.533 + [[ Linux == \L\i\n\u\x ]] 00:01:24.533 + sudo dmesg -T 00:01:24.533 sudo: /etc/sudoers.d/99-spdk-rlimits:1:23: unknown defaults entry "rlimit_core" 00:01:24.533 + sudo dmesg --clear 00:01:24.533 sudo: /etc/sudoers.d/99-spdk-rlimits:1:23: unknown defaults entry "rlimit_core" 00:01:24.533 + dmesg_pid=7395 00:01:24.533 + sudo dmesg -Tw 00:01:24.533 + [[ Rocky Linux == FreeBSD ]] 00:01:24.533 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.533 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:24.533 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:24.533 + [[ -x /usr/src/fio-static/fio ]] 00:01:24.533 
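The test stage above captures Vagrant's SSH settings once into a reusable ssh_conf file and then addresses the VM directly with ssh/scp -F, keeping vagrant itself out of the hot path. A minimal sketch of that pattern, run from the rocky9-libvirt directory the job created:

# Keep only the "Host ..." block from `vagrant ssh-config` and save it for reuse.
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf

# Later steps reach the VM through the saved config instead of `vagrant ssh`:
ssh -t -F ssh_conf vagrant@vagrant 'uname -a'
scp -F ssh_conf -r ./autorun-spdk.conf vagrant@vagrant:spdk_repo/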
+ [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:24.533 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:24.533 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:24.533 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:24.533 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:24.533 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:24.533 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:24.533 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:24.533 Test configuration: 00:01:24.533 SPDK_TEST_UNITTEST=1 00:01:24.533 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.533 SPDK_TEST_BLOCKDEV=1 00:01:24.533 SPDK_TEST_DAOS=1 00:01:24.533 SPDK_RUN_ASAN=1 00:01:24.533 SPDK_TEST_USE_IGB_UIO=1 00:01:24.533 SPDK_TEST_RELEASE_BUILD=1 00:01:24.533 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:24.533 RUN_NIGHTLY=0 13:54:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:24.791 13:54:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:24.791 13:54:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:24.791 13:54:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:24.791 13:54:10 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:24.791 13:54:10 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:24.791 13:54:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:24.791 13:54:10 -- paths/export.sh@5 -- $ export PATH 00:01:24.791 13:54:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:01:24.791 13:54:10 -- common/autobuild_common.sh@472 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:24.791 13:54:10 -- common/autobuild_common.sh@473 -- $ date +%s 00:01:24.791 13:54:10 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721051650.XXXXXX 00:01:24.791 13:54:10 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721051650.KeOztC 00:01:24.791 13:54:10 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:01:24.791 13:54:10 -- common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:01:24.791 13:54:10 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:24.791 13:54:10 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 
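The whole test matrix for this run is carried in autorun-spdk.conf, which autorun.sh sources before deciding what to build and test. A sketch of reproducing that step by hand with the same configuration dumped above, assuming the SPDK repo layout used in this VM (~/spdk_repo/spdk):

# autorun-spdk.conf selects which test groups autorun.sh will execute.
cat > autorun-spdk.conf <<'EOF'
SPDK_TEST_UNITTEST=1
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_BLOCKDEV=1
SPDK_TEST_DAOS=1
SPDK_RUN_ASAN=1
SPDK_TEST_USE_IGB_UIO=1
SPDK_TEST_RELEASE_BUILD=1
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
RUN_NIGHTLY=0
EOF
# autorun.sh sources the file and runs autobuild/autotest accordingly.
spdk/autorun.sh "$PWD/autorun-spdk.conf"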
00:01:24.791 13:54:10 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:24.791 13:54:10 -- common/autobuild_common.sh@489 -- $ get_config_params 00:01:24.791 13:54:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:24.791 13:54:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.791 13:54:10 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-asan --enable-coverage' 00:01:24.791 13:54:10 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:01:24.791 13:54:10 -- pm/common@17 -- $ local monitor 00:01:24.791 13:54:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.791 13:54:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:24.791 13:54:10 -- pm/common@25 -- $ sleep 1 00:01:24.791 13:54:10 -- pm/common@21 -- $ date +%s 00:01:24.791 13:54:10 -- pm/common@21 -- $ date +%s 00:01:24.791 13:54:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721051650 00:01:24.791 13:54:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721051650 00:01:24.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721051650_collect-vmstat.pm.log 00:01:24.791 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721051650_collect-cpu-load.pm.log 00:01:25.723 13:54:11 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:01:25.723 13:54:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:25.723 13:54:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:25.723 13:54:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:25.723 13:54:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:25.723 Mon Jul 15 13:54:11 UTC 2024 00:01:25.723 13:54:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:25.723 v24.09-pre-204-g255871c19 00:01:25.723 13:54:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:25.723 13:54:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:25.723 13:54:11 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:25.723 13:54:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:25.723 13:54:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.723 ************************************ 00:01:25.723 START TEST asan 00:01:25.723 ************************************ 00:01:25.723 using asan 00:01:25.723 13:54:11 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan' 00:01:25.723 00:01:25.723 real 0m0.000s 00:01:25.723 user 0m0.000s 00:01:25.723 sys 0m0.000s 00:01:25.723 13:54:11 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:25.723 13:54:11 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:25.723 ************************************ 00:01:25.723 END TEST asan 00:01:25.723 ************************************ 00:01:25.723 13:54:11 -- common/autotest_common.sh@1142 -- $ return 0 00:01:25.723 13:54:11 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:01:25.723 13:54:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:25.723 13:54:11 -- spdk/autobuild.sh@31 -- $ case 
"$SPDK_TEST_AUTOBUILD" in 00:01:25.723 13:54:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:25.723 13:54:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:25.723 13:54:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:25.723 13:54:11 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:25.723 13:54:11 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:25.723 13:54:11 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:01:25.723 13:54:11 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:25.723 13:54:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:25.723 13:54:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.723 ************************************ 00:01:25.723 START TEST unittest_build 00:01:25.723 ************************************ 00:01:25.723 13:54:11 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:01:25.723 13:54:11 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-asan --enable-coverage --without-shared 00:01:25.980 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:25.980 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:26.238 Using 'verbs' RDMA provider 00:01:42.054 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:01:56.979 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:01:56.979 Creating mk/config.mk...done. 00:01:56.979 Creating mk/cc.flags.mk...done. 00:01:56.979 Type 'make' to build. 00:01:56.979 13:54:41 unittest_build -- common/autobuild_common.sh@412 -- $ make -j10 00:01:56.979 make[1]: Nothing to be done for 'all'. 
00:02:18.905 The Meson build system 00:02:18.905 Version: 1.4.0 00:02:18.905 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:18.905 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:18.905 Build type: native build 00:02:18.905 Program cat found: YES (/bin/cat) 00:02:18.905 Project name: DPDK 00:02:18.905 Project version: 24.03.0 00:02:18.905 C compiler for the host machine: cc (gcc 11.4.1 "cc (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)") 00:02:18.905 C linker for the host machine: cc ld.bfd 2.35.2-42 00:02:18.905 Host machine cpu family: x86_64 00:02:18.905 Host machine cpu: x86_64 00:02:18.905 Message: ## Building in Developer Mode ## 00:02:18.905 Program pkg-config found: YES (/bin/pkg-config) 00:02:18.905 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:18.905 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:18.905 Program python3 found: YES (/usr/bin/python3) 00:02:18.905 Program cat found: YES (/bin/cat) 00:02:18.905 Compiler for C supports arguments -march=native: YES 00:02:18.905 Checking for size of "void *" : 8 00:02:18.905 Checking for size of "void *" : 8 (cached) 00:02:18.905 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:18.905 Library m found: YES 00:02:18.905 Library numa found: YES 00:02:18.905 Has header "numaif.h" : YES 00:02:18.905 Library fdt found: NO 00:02:18.905 Library execinfo found: NO 00:02:18.905 Has header "execinfo.h" : YES 00:02:18.905 Found pkg-config: YES (/bin/pkg-config) 1.7.3 00:02:18.905 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:18.905 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:18.905 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:18.905 Run-time dependency openssl found: YES 3.0.7 00:02:18.905 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:18.905 Library pcap found: NO 00:02:18.905 Compiler for C supports arguments -Wcast-qual: YES 00:02:18.905 Compiler for C supports arguments -Wdeprecated: YES 00:02:18.905 Compiler for C supports arguments -Wformat: YES 00:02:18.905 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:18.905 Compiler for C supports arguments -Wformat-security: NO 00:02:18.905 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:18.905 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:18.905 Compiler for C supports arguments -Wnested-externs: YES 00:02:18.905 Compiler for C supports arguments -Wold-style-definition: YES 00:02:18.905 Compiler for C supports arguments -Wpointer-arith: YES 00:02:18.905 Compiler for C supports arguments -Wsign-compare: YES 00:02:18.905 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:18.905 Compiler for C supports arguments -Wundef: YES 00:02:18.905 Compiler for C supports arguments -Wwrite-strings: YES 00:02:18.905 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:18.905 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:18.905 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:18.905 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:18.905 Program objdump found: YES (/bin/objdump) 00:02:18.905 Compiler for C supports arguments -mavx512f: YES 00:02:18.905 Checking if "AVX512 checking" compiles: YES 00:02:18.905 Fetching value of define "__SSE4_2__" : 1 00:02:18.905 Fetching value of define "__AES__" : 1 00:02:18.905 Fetching 
value of define "__AVX__" : 1 00:02:18.905 Fetching value of define "__AVX2__" : 1 00:02:18.905 Fetching value of define "__AVX512BW__" : (undefined) 00:02:18.905 Fetching value of define "__AVX512CD__" : (undefined) 00:02:18.905 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:18.905 Fetching value of define "__AVX512F__" : (undefined) 00:02:18.905 Fetching value of define "__AVX512VL__" : (undefined) 00:02:18.905 Fetching value of define "__PCLMUL__" : 1 00:02:18.905 Fetching value of define "__RDRND__" : 1 00:02:18.905 Fetching value of define "__RDSEED__" : 1 00:02:18.905 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:18.905 Fetching value of define "__znver1__" : (undefined) 00:02:18.905 Fetching value of define "__znver2__" : (undefined) 00:02:18.905 Fetching value of define "__znver3__" : (undefined) 00:02:18.905 Fetching value of define "__znver4__" : (undefined) 00:02:18.905 Library asan found: YES 00:02:18.905 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:18.905 Message: lib/log: Defining dependency "log" 00:02:18.905 Message: lib/kvargs: Defining dependency "kvargs" 00:02:18.905 Message: lib/telemetry: Defining dependency "telemetry" 00:02:18.905 Library rt found: YES 00:02:18.905 Checking for function "getentropy" : NO 00:02:18.905 Message: lib/eal: Defining dependency "eal" 00:02:18.905 Message: lib/ring: Defining dependency "ring" 00:02:18.905 Message: lib/rcu: Defining dependency "rcu" 00:02:18.905 Message: lib/mempool: Defining dependency "mempool" 00:02:18.905 Message: lib/mbuf: Defining dependency "mbuf" 00:02:18.905 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:18.905 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:18.905 Compiler for C supports arguments -mpclmul: YES 00:02:18.905 Compiler for C supports arguments -maes: YES 00:02:18.905 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:18.905 Compiler for C supports arguments -mavx512bw: YES 00:02:18.905 Compiler for C supports arguments -mavx512dq: YES 00:02:18.905 Compiler for C supports arguments -mavx512vl: YES 00:02:18.905 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:18.905 Compiler for C supports arguments -mavx2: YES 00:02:18.906 Compiler for C supports arguments -mavx: YES 00:02:18.906 Message: lib/net: Defining dependency "net" 00:02:18.906 Message: lib/meter: Defining dependency "meter" 00:02:18.906 Message: lib/ethdev: Defining dependency "ethdev" 00:02:18.906 Message: lib/pci: Defining dependency "pci" 00:02:18.906 Message: lib/cmdline: Defining dependency "cmdline" 00:02:18.906 Message: lib/hash: Defining dependency "hash" 00:02:18.906 Message: lib/timer: Defining dependency "timer" 00:02:18.906 Message: lib/compressdev: Defining dependency "compressdev" 00:02:18.906 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:18.906 Message: lib/dmadev: Defining dependency "dmadev" 00:02:18.906 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:18.906 Message: lib/power: Defining dependency "power" 00:02:18.906 Message: lib/reorder: Defining dependency "reorder" 00:02:18.906 Message: lib/security: Defining dependency "security" 00:02:18.906 Has header "linux/userfaultfd.h" : YES 00:02:18.906 Has header "linux/vduse.h" : NO 00:02:18.906 Message: lib/vhost: Defining dependency "vhost" 00:02:18.906 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:18.906 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:18.906 Message: drivers/bus/vdev: Defining dependency 
"bus_vdev" 00:02:18.906 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:18.906 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:18.906 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:18.906 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:18.906 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:18.906 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:18.906 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:18.906 Program doxygen found: YES (/bin/doxygen) 00:02:18.906 Configuring doxy-api-html.conf using configuration 00:02:18.906 Configuring doxy-api-man.conf using configuration 00:02:18.906 Program mandb found: YES (/bin/mandb) 00:02:18.906 Program sphinx-build found: NO 00:02:18.906 Configuring rte_build_config.h using configuration 00:02:18.906 Message: 00:02:18.906 ================= 00:02:18.906 Applications Enabled 00:02:18.906 ================= 00:02:18.906 00:02:18.906 apps: 00:02:18.906 00:02:18.906 00:02:18.906 Message: 00:02:18.906 ================= 00:02:18.906 Libraries Enabled 00:02:18.906 ================= 00:02:18.906 00:02:18.906 libs: 00:02:18.906 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:18.906 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:18.906 cryptodev, dmadev, power, reorder, security, vhost, 00:02:18.906 00:02:18.906 Message: 00:02:18.906 =============== 00:02:18.906 Drivers Enabled 00:02:18.906 =============== 00:02:18.906 00:02:18.906 common: 00:02:18.906 00:02:18.906 bus: 00:02:18.906 pci, vdev, 00:02:18.906 mempool: 00:02:18.906 ring, 00:02:18.906 dma: 00:02:18.906 00:02:18.906 net: 00:02:18.906 00:02:18.906 crypto: 00:02:18.906 00:02:18.906 compress: 00:02:18.906 00:02:18.906 vdpa: 00:02:18.906 00:02:18.906 00:02:18.906 Message: 00:02:18.906 ================= 00:02:18.906 Content Skipped 00:02:18.906 ================= 00:02:18.906 00:02:18.906 apps: 00:02:18.906 dumpcap: explicitly disabled via build config 00:02:18.906 graph: explicitly disabled via build config 00:02:18.906 pdump: explicitly disabled via build config 00:02:18.906 proc-info: explicitly disabled via build config 00:02:18.906 test-acl: explicitly disabled via build config 00:02:18.906 test-bbdev: explicitly disabled via build config 00:02:18.906 test-cmdline: explicitly disabled via build config 00:02:18.906 test-compress-perf: explicitly disabled via build config 00:02:18.906 test-crypto-perf: explicitly disabled via build config 00:02:18.906 test-dma-perf: explicitly disabled via build config 00:02:18.906 test-eventdev: explicitly disabled via build config 00:02:18.906 test-fib: explicitly disabled via build config 00:02:18.906 test-flow-perf: explicitly disabled via build config 00:02:18.906 test-gpudev: explicitly disabled via build config 00:02:18.906 test-mldev: explicitly disabled via build config 00:02:18.906 test-pipeline: explicitly disabled via build config 00:02:18.906 test-pmd: explicitly disabled via build config 00:02:18.906 test-regex: explicitly disabled via build config 00:02:18.906 test-sad: explicitly disabled via build config 00:02:18.906 test-security-perf: explicitly disabled via build config 00:02:18.906 00:02:18.906 libs: 00:02:18.906 argparse: explicitly disabled via build config 00:02:18.906 metrics: explicitly disabled via build config 00:02:18.906 acl: explicitly disabled via build config 00:02:18.906 bbdev: explicitly disabled via 
build config 00:02:18.906 bitratestats: explicitly disabled via build config 00:02:18.906 bpf: explicitly disabled via build config 00:02:18.906 cfgfile: explicitly disabled via build config 00:02:18.906 distributor: explicitly disabled via build config 00:02:18.906 efd: explicitly disabled via build config 00:02:18.906 eventdev: explicitly disabled via build config 00:02:18.906 dispatcher: explicitly disabled via build config 00:02:18.906 gpudev: explicitly disabled via build config 00:02:18.906 gro: explicitly disabled via build config 00:02:18.906 gso: explicitly disabled via build config 00:02:18.906 ip_frag: explicitly disabled via build config 00:02:18.906 jobstats: explicitly disabled via build config 00:02:18.906 latencystats: explicitly disabled via build config 00:02:18.906 lpm: explicitly disabled via build config 00:02:18.906 member: explicitly disabled via build config 00:02:18.906 pcapng: explicitly disabled via build config 00:02:18.906 rawdev: explicitly disabled via build config 00:02:18.906 regexdev: explicitly disabled via build config 00:02:18.906 mldev: explicitly disabled via build config 00:02:18.906 rib: explicitly disabled via build config 00:02:18.906 sched: explicitly disabled via build config 00:02:18.906 stack: explicitly disabled via build config 00:02:18.906 ipsec: explicitly disabled via build config 00:02:18.906 pdcp: explicitly disabled via build config 00:02:18.906 fib: explicitly disabled via build config 00:02:18.906 port: explicitly disabled via build config 00:02:18.906 pdump: explicitly disabled via build config 00:02:18.906 table: explicitly disabled via build config 00:02:18.906 pipeline: explicitly disabled via build config 00:02:18.906 graph: explicitly disabled via build config 00:02:18.906 node: explicitly disabled via build config 00:02:18.906 00:02:18.906 drivers: 00:02:18.906 common/cpt: not in enabled drivers build config 00:02:18.906 common/dpaax: not in enabled drivers build config 00:02:18.906 common/iavf: not in enabled drivers build config 00:02:18.906 common/idpf: not in enabled drivers build config 00:02:18.906 common/ionic: not in enabled drivers build config 00:02:18.906 common/mvep: not in enabled drivers build config 00:02:18.906 common/octeontx: not in enabled drivers build config 00:02:18.906 bus/auxiliary: not in enabled drivers build config 00:02:18.906 bus/cdx: not in enabled drivers build config 00:02:18.906 bus/dpaa: not in enabled drivers build config 00:02:18.906 bus/fslmc: not in enabled drivers build config 00:02:18.906 bus/ifpga: not in enabled drivers build config 00:02:18.906 bus/platform: not in enabled drivers build config 00:02:18.906 bus/uacce: not in enabled drivers build config 00:02:18.906 bus/vmbus: not in enabled drivers build config 00:02:18.906 common/cnxk: not in enabled drivers build config 00:02:18.906 common/mlx5: not in enabled drivers build config 00:02:18.906 common/nfp: not in enabled drivers build config 00:02:18.906 common/nitrox: not in enabled drivers build config 00:02:18.906 common/qat: not in enabled drivers build config 00:02:18.906 common/sfc_efx: not in enabled drivers build config 00:02:18.906 mempool/bucket: not in enabled drivers build config 00:02:18.906 mempool/cnxk: not in enabled drivers build config 00:02:18.906 mempool/dpaa: not in enabled drivers build config 00:02:18.906 mempool/dpaa2: not in enabled drivers build config 00:02:18.906 mempool/octeontx: not in enabled drivers build config 00:02:18.906 mempool/stack: not in enabled drivers build config 00:02:18.906 dma/cnxk: not 
in enabled drivers build config 00:02:18.906 dma/dpaa: not in enabled drivers build config 00:02:18.906 dma/dpaa2: not in enabled drivers build config 00:02:18.906 dma/hisilicon: not in enabled drivers build config 00:02:18.906 dma/idxd: not in enabled drivers build config 00:02:18.906 dma/ioat: not in enabled drivers build config 00:02:18.906 dma/skeleton: not in enabled drivers build config 00:02:18.906 net/af_packet: not in enabled drivers build config 00:02:18.906 net/af_xdp: not in enabled drivers build config 00:02:18.906 net/ark: not in enabled drivers build config 00:02:18.906 net/atlantic: not in enabled drivers build config 00:02:18.906 net/avp: not in enabled drivers build config 00:02:18.906 net/axgbe: not in enabled drivers build config 00:02:18.906 net/bnx2x: not in enabled drivers build config 00:02:18.906 net/bnxt: not in enabled drivers build config 00:02:18.906 net/bonding: not in enabled drivers build config 00:02:18.906 net/cnxk: not in enabled drivers build config 00:02:18.906 net/cpfl: not in enabled drivers build config 00:02:18.906 net/cxgbe: not in enabled drivers build config 00:02:18.906 net/dpaa: not in enabled drivers build config 00:02:18.906 net/dpaa2: not in enabled drivers build config 00:02:18.906 net/e1000: not in enabled drivers build config 00:02:18.906 net/ena: not in enabled drivers build config 00:02:18.906 net/enetc: not in enabled drivers build config 00:02:18.906 net/enetfec: not in enabled drivers build config 00:02:18.906 net/enic: not in enabled drivers build config 00:02:18.906 net/failsafe: not in enabled drivers build config 00:02:18.906 net/fm10k: not in enabled drivers build config 00:02:18.906 net/gve: not in enabled drivers build config 00:02:18.906 net/hinic: not in enabled drivers build config 00:02:18.906 net/hns3: not in enabled drivers build config 00:02:18.906 net/i40e: not in enabled drivers build config 00:02:18.906 net/iavf: not in enabled drivers build config 00:02:18.906 net/ice: not in enabled drivers build config 00:02:18.906 net/idpf: not in enabled drivers build config 00:02:18.906 net/igc: not in enabled drivers build config 00:02:18.906 net/ionic: not in enabled drivers build config 00:02:18.906 net/ipn3ke: not in enabled drivers build config 00:02:18.906 net/ixgbe: not in enabled drivers build config 00:02:18.906 net/mana: not in enabled drivers build config 00:02:18.906 net/memif: not in enabled drivers build config 00:02:18.906 net/mlx4: not in enabled drivers build config 00:02:18.906 net/mlx5: not in enabled drivers build config 00:02:18.906 net/mvneta: not in enabled drivers build config 00:02:18.906 net/mvpp2: not in enabled drivers build config 00:02:18.906 net/netvsc: not in enabled drivers build config 00:02:18.906 net/nfb: not in enabled drivers build config 00:02:18.906 net/nfp: not in enabled drivers build config 00:02:18.906 net/ngbe: not in enabled drivers build config 00:02:18.906 net/null: not in enabled drivers build config 00:02:18.906 net/octeontx: not in enabled drivers build config 00:02:18.906 net/octeon_ep: not in enabled drivers build config 00:02:18.906 net/pcap: not in enabled drivers build config 00:02:18.906 net/pfe: not in enabled drivers build config 00:02:18.906 net/qede: not in enabled drivers build config 00:02:18.906 net/ring: not in enabled drivers build config 00:02:18.906 net/sfc: not in enabled drivers build config 00:02:18.906 net/softnic: not in enabled drivers build config 00:02:18.906 net/tap: not in enabled drivers build config 00:02:18.906 net/thunderx: not in enabled drivers 
build config 00:02:18.906 net/txgbe: not in enabled drivers build config 00:02:18.906 net/vdev_netvsc: not in enabled drivers build config 00:02:18.906 net/vhost: not in enabled drivers build config 00:02:18.906 net/virtio: not in enabled drivers build config 00:02:18.906 net/vmxnet3: not in enabled drivers build config 00:02:18.906 raw/*: missing internal dependency, "rawdev" 00:02:18.906 crypto/armv8: not in enabled drivers build config 00:02:18.906 crypto/bcmfs: not in enabled drivers build config 00:02:18.906 crypto/caam_jr: not in enabled drivers build config 00:02:18.906 crypto/ccp: not in enabled drivers build config 00:02:18.906 crypto/cnxk: not in enabled drivers build config 00:02:18.906 crypto/dpaa_sec: not in enabled drivers build config 00:02:18.906 crypto/dpaa2_sec: not in enabled drivers build config 00:02:18.906 crypto/ipsec_mb: not in enabled drivers build config 00:02:18.906 crypto/mlx5: not in enabled drivers build config 00:02:18.906 crypto/mvsam: not in enabled drivers build config 00:02:18.906 crypto/nitrox: not in enabled drivers build config 00:02:18.906 crypto/null: not in enabled drivers build config 00:02:18.906 crypto/octeontx: not in enabled drivers build config 00:02:18.906 crypto/openssl: not in enabled drivers build config 00:02:18.906 crypto/scheduler: not in enabled drivers build config 00:02:18.906 crypto/uadk: not in enabled drivers build config 00:02:18.906 crypto/virtio: not in enabled drivers build config 00:02:18.906 compress/isal: not in enabled drivers build config 00:02:18.906 compress/mlx5: not in enabled drivers build config 00:02:18.906 compress/nitrox: not in enabled drivers build config 00:02:18.906 compress/octeontx: not in enabled drivers build config 00:02:18.906 compress/zlib: not in enabled drivers build config 00:02:18.906 regex/*: missing internal dependency, "regexdev" 00:02:18.906 ml/*: missing internal dependency, "mldev" 00:02:18.906 vdpa/ifc: not in enabled drivers build config 00:02:18.906 vdpa/mlx5: not in enabled drivers build config 00:02:18.906 vdpa/nfp: not in enabled drivers build config 00:02:18.906 vdpa/sfc: not in enabled drivers build config 00:02:18.906 event/*: missing internal dependency, "eventdev" 00:02:18.906 baseband/*: missing internal dependency, "bbdev" 00:02:18.906 gpu/*: missing internal dependency, "gpudev" 00:02:18.906 00:02:18.906 00:02:18.906 Build targets in project: 85 00:02:18.906 00:02:18.906 DPDK 24.03.0 00:02:18.906 00:02:18.906 User defined options 00:02:18.906 buildtype : debug 00:02:18.906 default_library : static 00:02:18.906 libdir : lib 00:02:18.906 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:18.906 b_sanitize : address 00:02:18.906 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:02:18.906 c_link_args : 00:02:18.906 cpu_instruction_set: native 00:02:18.906 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:18.906 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:18.906 enable_docs : false 00:02:18.906 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:18.906 enable_kmods : false 00:02:18.906 max_lcores : 128 00:02:18.906 tests : false 00:02:18.906 
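The "User defined options" block above summarizes how the bundled DPDK was configured for this debug/ASAN build. A rough standalone equivalent is sketched below; it is not the exact command SPDK's dpdkbuild runs, and the long disable_apps/disable_libs/enable_drivers lists from the summary are omitted for brevity:

cd /home/vagrant/spdk_repo/spdk/dpdk
# Mirrors the "User defined options" summary printed by meson above.
meson setup build-tmp \
    --buildtype=debug \
    --default-library=static \
    --libdir=lib \
    --prefix="$PWD/build" \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror' \
    -Dmax_lcores=128 \
    -Dtests=false \
    -Denable_docs=false \
    -Denable_kmods=false
ninja -C build-tmp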
00:02:18.906 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:18.906 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:18.906 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:18.906 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:18.906 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:18.906 [4/267] Linking static target lib/librte_kvargs.a 00:02:18.906 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:18.906 [6/267] Linking static target lib/librte_log.a 00:02:18.906 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:18.906 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:18.906 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:18.906 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:18.906 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:18.906 [12/267] Linking static target lib/librte_telemetry.a 00:02:18.906 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:18.906 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:18.906 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:18.906 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:18.906 [17/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.164 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:19.164 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:19.164 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:19.164 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:19.164 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:19.422 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:19.422 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.422 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:19.680 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.680 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.680 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:19.680 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.680 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.937 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.937 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:19.937 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.937 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:20.195 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.195 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:20.195 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:20.195 [38/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.195 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.452 [40/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.452 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.452 [42/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.452 [43/267] Linking target lib/librte_log.so.24.1 00:02:20.452 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:20.452 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.452 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.708 [47/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:20.708 [48/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:20.708 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:20.708 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:20.708 [51/267] Linking target lib/librte_kvargs.so.24.1 00:02:20.708 [52/267] Linking target lib/librte_telemetry.so.24.1 00:02:20.708 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:20.708 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:20.966 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:20.966 [56/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:20.966 [57/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:20.966 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:20.966 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:20.966 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:21.223 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:21.223 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:21.223 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.223 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:21.223 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:21.223 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:21.223 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:21.480 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:21.480 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:21.737 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:21.737 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:21.737 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:21.737 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:21.737 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:21.737 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:21.737 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:21.737 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.737 [78/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 
00:02:21.737 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:21.994 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.994 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:22.251 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:22.251 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:22.251 [84/267] Linking static target lib/librte_ring.a 00:02:22.251 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.251 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:22.509 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.509 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:22.509 [89/267] Linking static target lib/librte_eal.a 00:02:22.509 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.509 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:22.767 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.767 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.767 [94/267] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.767 [95/267] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:23.025 [96/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.025 [97/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:23.283 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:23.283 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:23.283 [100/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:23.283 [101/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:23.283 [102/267] Linking static target lib/librte_mbuf.a 00:02:23.283 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:23.283 [104/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:23.283 [105/267] Linking static target lib/librte_net.a 00:02:23.283 [106/267] Linking static target lib/librte_mempool.a 00:02:23.283 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:23.551 [108/267] Linking static target lib/librte_meter.a 00:02:23.551 [109/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:23.551 [110/267] Linking static target lib/librte_rcu.a 00:02:23.817 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:23.817 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:23.817 [113/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.817 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.817 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.074 [116/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.074 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.331 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.331 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.588 [120/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.588 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.845 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.845 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:25.103 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:25.103 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.103 [126/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.103 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.103 [128/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:25.103 [129/267] Linking static target lib/librte_pci.a 00:02:25.103 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:25.103 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:25.103 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:25.103 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:25.103 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:25.361 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.361 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:25.361 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:25.361 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:25.361 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:25.361 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:25.361 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:25.361 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:25.361 [143/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.361 [144/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:25.361 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:25.618 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:25.618 [147/267] Linking static target lib/librte_cmdline.a 00:02:25.618 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:25.876 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:25.876 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:25.876 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:25.876 [152/267] Linking static target lib/librte_timer.a 00:02:25.876 [153/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.876 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:26.133 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:26.133 [156/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:26.133 [157/267] Linking static target lib/librte_ethdev.a 00:02:26.133 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:26.391 [159/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:26.391 [160/267] 
Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:26.391 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.391 [162/267] Linking static target lib/librte_compressdev.a 00:02:26.391 [163/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:26.391 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:26.391 [165/267] Linking static target lib/librte_hash.a 00:02:26.648 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:26.649 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:26.649 [168/267] Linking static target lib/librte_dmadev.a 00:02:26.649 [169/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.906 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:26.906 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.906 [172/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:26.906 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.164 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.164 [175/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:27.164 [176/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.428 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.428 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:27.428 [179/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:27.428 [180/267] Linking static target lib/librte_cryptodev.a 00:02:27.428 [181/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.428 [182/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.428 [183/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:27.428 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:27.685 [185/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.685 [186/267] Linking static target lib/librte_power.a 00:02:27.942 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.942 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:27.942 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:28.199 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:28.199 [191/267] Linking static target lib/librte_security.a 00:02:28.457 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:28.457 [193/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.457 [194/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:28.457 [195/267] Linking static target lib/librte_reorder.a 00:02:28.713 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.713 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:28.713 [198/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.971 [199/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:28.971 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:28.971 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:28.971 [202/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.229 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:29.229 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:29.229 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:29.229 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:29.487 [207/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.487 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.487 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:29.487 [210/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.487 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:29.487 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:29.745 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.745 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.745 [215/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.745 [216/267] Linking static target drivers/librte_bus_vdev.a 00:02:29.745 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.745 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:29.745 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:29.745 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.003 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.003 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:30.003 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.003 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.003 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:30.261 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.633 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:32.199 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.199 [229/267] Linking target lib/librte_eal.so.24.1 00:02:32.457 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:32.457 [231/267] Linking target lib/librte_ring.so.24.1 00:02:32.457 [232/267] Linking target lib/librte_pci.so.24.1 00:02:32.457 [233/267] Linking target lib/librte_timer.so.24.1 00:02:32.457 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:32.457 [235/267] Linking target lib/librte_meter.so.24.1 00:02:32.457 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:32.457 [237/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.780 [238/267] Generating 
symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:32.780 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:32.780 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:32.780 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:32.780 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:32.780 [243/267] Linking target lib/librte_mempool.so.24.1 00:02:32.780 [244/267] Linking target lib/librte_rcu.so.24.1 00:02:32.780 [245/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:32.780 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:32.780 [247/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:33.037 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:33.037 [249/267] Linking target lib/librte_mbuf.so.24.1 00:02:33.037 [250/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:33.037 [251/267] Linking target lib/librte_reorder.so.24.1 00:02:33.037 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:33.037 [253/267] Linking target lib/librte_net.so.24.1 00:02:33.037 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:33.295 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:33.295 [256/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:33.295 [257/267] Linking target lib/librte_cmdline.so.24.1 00:02:33.295 [258/267] Linking target lib/librte_security.so.24.1 00:02:33.295 [259/267] Linking target lib/librte_hash.so.24.1 00:02:33.295 [260/267] Linking target lib/librte_ethdev.so.24.1 00:02:33.552 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:33.552 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:33.552 [263/267] Linking target lib/librte_power.so.24.1 00:02:35.450 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.450 [265/267] Linking static target lib/librte_vhost.a 00:02:37.350 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.350 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:37.350 INFO: autodetecting backend as ninja 00:02:37.350 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:38.299 CC lib/ut/ut.o 00:02:38.299 CC lib/ut_mock/mock.o 00:02:38.299 CC lib/log/log.o 00:02:38.299 CC lib/log/log_flags.o 00:02:38.299 CC lib/log/log_deprecated.o 00:02:38.557 LIB libspdk_ut.a 00:02:38.557 LIB libspdk_ut_mock.a 00:02:38.557 LIB libspdk_log.a 00:02:38.815 CC lib/ioat/ioat.o 00:02:38.815 CC lib/dma/dma.o 00:02:38.815 CXX lib/trace_parser/trace.o 00:02:38.815 CC lib/util/bit_array.o 00:02:38.815 CC lib/util/base64.o 00:02:38.815 CC lib/util/cpuset.o 00:02:38.815 CC lib/util/crc16.o 00:02:38.815 CC lib/util/crc32c.o 00:02:38.815 CC lib/util/crc32.o 00:02:39.073 CC lib/util/crc32_ieee.o 00:02:39.073 CC lib/vfio_user/host/vfio_user_pci.o 00:02:39.073 CC lib/vfio_user/host/vfio_user.o 00:02:39.073 CC lib/util/crc64.o 00:02:39.073 CC lib/util/dif.o 00:02:39.074 CC lib/util/fd.o 00:02:39.074 CC lib/util/file.o 00:02:39.074 LIB libspdk_dma.a 00:02:39.074 CC lib/util/hexlify.o 00:02:39.074 CC lib/util/iov.o 00:02:39.074 CC 
lib/util/math.o 00:02:39.074 LIB libspdk_ioat.a 00:02:39.332 CC lib/util/pipe.o 00:02:39.332 CC lib/util/strerror_tls.o 00:02:39.332 CC lib/util/string.o 00:02:39.332 CC lib/util/uuid.o 00:02:39.332 CC lib/util/fd_group.o 00:02:39.332 CC lib/util/xor.o 00:02:39.332 LIB libspdk_vfio_user.a 00:02:39.332 CC lib/util/zipf.o 00:02:39.613 LIB libspdk_util.a 00:02:39.871 LIB libspdk_trace_parser.a 00:02:40.127 CC lib/rdma_provider/common.o 00:02:40.127 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:40.127 CC lib/env_dpdk/env.o 00:02:40.127 CC lib/env_dpdk/memory.o 00:02:40.127 CC lib/env_dpdk/pci.o 00:02:40.127 CC lib/rdma_utils/rdma_utils.o 00:02:40.127 CC lib/idxd/idxd.o 00:02:40.127 CC lib/conf/conf.o 00:02:40.127 CC lib/json/json_parse.o 00:02:40.127 CC lib/vmd/vmd.o 00:02:40.383 CC lib/vmd/led.o 00:02:40.383 CC lib/env_dpdk/init.o 00:02:40.383 LIB libspdk_rdma_provider.a 00:02:40.383 LIB libspdk_conf.a 00:02:40.383 CC lib/json/json_util.o 00:02:40.383 CC lib/env_dpdk/threads.o 00:02:40.383 CC lib/json/json_write.o 00:02:40.383 LIB libspdk_rdma_utils.a 00:02:40.383 CC lib/env_dpdk/pci_ioat.o 00:02:40.640 CC lib/env_dpdk/pci_virtio.o 00:02:40.640 CC lib/idxd/idxd_user.o 00:02:40.640 LIB libspdk_vmd.a 00:02:40.640 CC lib/env_dpdk/pci_vmd.o 00:02:40.640 CC lib/env_dpdk/pci_idxd.o 00:02:40.640 CC lib/env_dpdk/pci_event.o 00:02:40.640 CC lib/env_dpdk/sigbus_handler.o 00:02:40.640 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:40.640 CC lib/env_dpdk/pci_dpdk.o 00:02:40.640 LIB libspdk_idxd.a 00:02:40.896 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:40.896 LIB libspdk_json.a 00:02:41.154 CC lib/jsonrpc/jsonrpc_server.o 00:02:41.154 CC lib/jsonrpc/jsonrpc_client.o 00:02:41.154 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:41.154 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:41.412 LIB libspdk_env_dpdk.a 00:02:41.412 LIB libspdk_jsonrpc.a 00:02:41.670 CC lib/rpc/rpc.o 00:02:41.938 LIB libspdk_rpc.a 00:02:42.213 CC lib/notify/notify.o 00:02:42.213 CC lib/notify/notify_rpc.o 00:02:42.213 CC lib/trace/trace.o 00:02:42.213 CC lib/keyring/keyring_rpc.o 00:02:42.213 CC lib/trace/trace_flags.o 00:02:42.213 CC lib/keyring/keyring.o 00:02:42.213 CC lib/trace/trace_rpc.o 00:02:42.470 LIB libspdk_notify.a 00:02:42.470 LIB libspdk_keyring.a 00:02:42.470 LIB libspdk_trace.a 00:02:42.727 CC lib/sock/sock_rpc.o 00:02:42.727 CC lib/sock/sock.o 00:02:42.727 CC lib/thread/thread.o 00:02:42.727 CC lib/thread/iobuf.o 00:02:43.304 LIB libspdk_sock.a 00:02:43.561 CC lib/nvme/nvme_ctrlr.o 00:02:43.561 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:43.561 CC lib/nvme/nvme_fabric.o 00:02:43.561 CC lib/nvme/nvme_ns_cmd.o 00:02:43.561 CC lib/nvme/nvme_ns.o 00:02:43.561 CC lib/nvme/nvme_pcie_common.o 00:02:43.561 CC lib/nvme/nvme_pcie.o 00:02:43.561 CC lib/nvme/nvme_qpair.o 00:02:43.561 CC lib/nvme/nvme.o 00:02:43.819 LIB libspdk_thread.a 00:02:44.110 CC lib/nvme/nvme_quirks.o 00:02:44.110 CC lib/nvme/nvme_transport.o 00:02:44.368 CC lib/nvme/nvme_discovery.o 00:02:44.626 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:44.626 CC lib/accel/accel.o 00:02:44.626 CC lib/accel/accel_rpc.o 00:02:44.626 CC lib/accel/accel_sw.o 00:02:44.884 CC lib/blob/blobstore.o 00:02:44.884 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:44.884 CC lib/blob/request.o 00:02:44.884 CC lib/init/json_config.o 00:02:44.884 CC lib/init/subsystem.o 00:02:45.143 CC lib/init/subsystem_rpc.o 00:02:45.143 CC lib/init/rpc.o 00:02:45.143 CC lib/blob/zeroes.o 00:02:45.143 CC lib/nvme/nvme_tcp.o 00:02:45.143 CC lib/nvme/nvme_opal.o 00:02:45.143 CC lib/nvme/nvme_io_msg.o 00:02:45.402 CC lib/nvme/nvme_poll_group.o 
00:02:45.402 LIB libspdk_init.a 00:02:45.402 CC lib/blob/blob_bs_dev.o 00:02:45.402 CC lib/nvme/nvme_zns.o 00:02:45.660 LIB libspdk_accel.a 00:02:45.660 CC lib/nvme/nvme_stubs.o 00:02:45.660 CC lib/nvme/nvme_auth.o 00:02:45.660 CC lib/nvme/nvme_cuse.o 00:02:45.660 CC lib/nvme/nvme_rdma.o 00:02:45.918 CC lib/virtio/virtio.o 00:02:45.918 CC lib/bdev/bdev.o 00:02:46.176 CC lib/event/app.o 00:02:46.176 CC lib/bdev/bdev_rpc.o 00:02:46.176 CC lib/virtio/virtio_vhost_user.o 00:02:46.434 CC lib/virtio/virtio_vfio_user.o 00:02:46.434 CC lib/virtio/virtio_pci.o 00:02:46.434 CC lib/bdev/bdev_zone.o 00:02:46.434 CC lib/bdev/part.o 00:02:46.434 CC lib/bdev/scsi_nvme.o 00:02:46.434 CC lib/event/reactor.o 00:02:46.691 CC lib/event/log_rpc.o 00:02:46.691 CC lib/event/app_rpc.o 00:02:46.691 CC lib/event/scheduler_static.o 00:02:46.691 LIB libspdk_nvme.a 00:02:46.691 LIB libspdk_virtio.a 00:02:46.691 LIB libspdk_blob.a 00:02:46.949 LIB libspdk_event.a 00:02:46.949 CC lib/lvol/lvol.o 00:02:47.207 CC lib/blobfs/blobfs.o 00:02:47.207 CC lib/blobfs/tree.o 00:02:47.464 LIB libspdk_bdev.a 00:02:47.721 LIB libspdk_blobfs.a 00:02:47.721 LIB libspdk_lvol.a 00:02:47.721 CC lib/nbd/nbd_rpc.o 00:02:47.721 CC lib/nbd/nbd.o 00:02:47.721 CC lib/scsi/dev.o 00:02:47.721 CC lib/scsi/lun.o 00:02:47.721 CC lib/scsi/port.o 00:02:47.721 CC lib/scsi/scsi.o 00:02:47.721 CC lib/nvmf/ctrlr_discovery.o 00:02:47.721 CC lib/ftl/ftl_core.o 00:02:47.721 CC lib/nvmf/ctrlr.o 00:02:47.721 CC lib/scsi/scsi_bdev.o 00:02:47.978 CC lib/scsi/scsi_pr.o 00:02:47.978 CC lib/ftl/ftl_init.o 00:02:47.978 CC lib/scsi/scsi_rpc.o 00:02:47.978 LIB libspdk_nbd.a 00:02:48.318 CC lib/scsi/task.o 00:02:48.318 CC lib/nvmf/ctrlr_bdev.o 00:02:48.318 CC lib/nvmf/subsystem.o 00:02:48.318 CC lib/nvmf/nvmf.o 00:02:48.318 CC lib/nvmf/nvmf_rpc.o 00:02:48.318 CC lib/nvmf/transport.o 00:02:48.318 CC lib/nvmf/tcp.o 00:02:48.318 CC lib/ftl/ftl_layout.o 00:02:48.318 CC lib/ftl/ftl_debug.o 00:02:48.575 LIB libspdk_scsi.a 00:02:48.575 CC lib/ftl/ftl_io.o 00:02:48.575 CC lib/nvmf/stubs.o 00:02:48.575 CC lib/nvmf/mdns_server.o 00:02:48.576 CC lib/nvmf/rdma.o 00:02:48.576 CC lib/nvmf/auth.o 00:02:48.833 CC lib/ftl/ftl_sb.o 00:02:48.833 CC lib/ftl/ftl_l2p.o 00:02:48.833 CC lib/ftl/ftl_l2p_flat.o 00:02:48.833 CC lib/ftl/ftl_nv_cache.o 00:02:48.833 CC lib/ftl/ftl_band.o 00:02:48.833 CC lib/ftl/ftl_band_ops.o 00:02:49.091 CC lib/ftl/ftl_writer.o 00:02:49.091 CC lib/ftl/ftl_rq.o 00:02:49.091 CC lib/ftl/ftl_reloc.o 00:02:49.091 CC lib/iscsi/conn.o 00:02:49.091 CC lib/ftl/ftl_l2p_cache.o 00:02:49.091 CC lib/ftl/ftl_p2l.o 00:02:49.348 CC lib/iscsi/init_grp.o 00:02:49.348 CC lib/iscsi/iscsi.o 00:02:49.348 CC lib/vhost/vhost.o 00:02:49.348 CC lib/vhost/vhost_rpc.o 00:02:49.348 CC lib/ftl/mngt/ftl_mngt.o 00:02:49.348 CC lib/vhost/vhost_scsi.o 00:02:49.606 CC lib/vhost/vhost_blk.o 00:02:49.606 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:49.606 CC lib/vhost/rte_vhost_user.o 00:02:49.606 LIB libspdk_nvmf.a 00:02:49.606 CC lib/iscsi/md5.o 00:02:49.864 CC lib/iscsi/param.o 00:02:49.864 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:49.864 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:49.864 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:49.864 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:49.864 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:49.864 CC lib/iscsi/portal_grp.o 00:02:50.120 CC lib/iscsi/tgt_node.o 00:02:50.120 CC lib/iscsi/iscsi_subsystem.o 00:02:50.120 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:50.120 CC lib/iscsi/iscsi_rpc.o 00:02:50.120 CC lib/iscsi/task.o 00:02:50.120 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:50.120 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:50.120 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:50.377 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:50.377 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:50.377 CC lib/ftl/utils/ftl_conf.o 00:02:50.377 LIB libspdk_vhost.a 00:02:50.377 CC lib/ftl/utils/ftl_md.o 00:02:50.377 CC lib/ftl/utils/ftl_mempool.o 00:02:50.377 CC lib/ftl/utils/ftl_bitmap.o 00:02:50.377 CC lib/ftl/utils/ftl_property.o 00:02:50.377 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:50.377 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:50.634 LIB libspdk_iscsi.a 00:02:50.634 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:50.634 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:50.634 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:50.634 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:50.634 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:50.634 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:50.634 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:50.634 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:50.634 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:50.634 CC lib/ftl/base/ftl_base_dev.o 00:02:50.634 CC lib/ftl/base/ftl_base_bdev.o 00:02:50.892 CC lib/ftl/ftl_trace.o 00:02:50.892 LIB libspdk_ftl.a 00:02:51.456 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.456 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.456 CC module/accel/dsa/accel_dsa.o 00:02:51.456 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.456 CC module/keyring/file/keyring.o 00:02:51.456 CC module/sock/posix/posix.o 00:02:51.456 CC module/blob/bdev/blob_bdev.o 00:02:51.456 CC module/accel/ioat/accel_ioat.o 00:02:51.714 CC module/accel/error/accel_error.o 00:02:51.714 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:51.714 LIB libspdk_env_dpdk_rpc.a 00:02:51.714 CC module/keyring/file/keyring_rpc.o 00:02:51.714 LIB libspdk_scheduler_gscheduler.a 00:02:51.714 LIB libspdk_scheduler_dynamic.a 00:02:51.714 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.714 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.714 CC module/accel/error/accel_error_rpc.o 00:02:51.714 LIB libspdk_keyring_file.a 00:02:51.714 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.972 LIB libspdk_accel_ioat.a 00:02:51.972 LIB libspdk_accel_dsa.a 00:02:51.972 LIB libspdk_blob_bdev.a 00:02:51.972 LIB libspdk_accel_error.a 00:02:51.972 CC module/keyring/linux/keyring.o 00:02:51.972 CC module/accel/iaa/accel_iaa.o 00:02:51.972 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.972 CC module/keyring/linux/keyring_rpc.o 00:02:52.231 LIB libspdk_keyring_linux.a 00:02:52.231 LIB libspdk_accel_iaa.a 00:02:52.231 CC module/bdev/gpt/gpt.o 00:02:52.231 CC module/bdev/gpt/vbdev_gpt.o 00:02:52.231 CC module/bdev/delay/vbdev_delay.o 00:02:52.231 CC module/bdev/lvol/vbdev_lvol.o 00:02:52.231 CC module/bdev/error/vbdev_error.o 00:02:52.231 CC module/blobfs/bdev/blobfs_bdev.o 00:02:52.231 CC module/bdev/malloc/bdev_malloc.o 00:02:52.231 CC module/bdev/null/bdev_null.o 00:02:52.490 CC module/bdev/nvme/bdev_nvme.o 00:02:52.490 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.490 LIB libspdk_sock_posix.a 00:02:52.490 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.490 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:52.490 CC module/bdev/nvme/nvme_rpc.o 00:02:52.490 LIB libspdk_bdev_gpt.a 00:02:52.490 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:52.490 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.490 CC module/bdev/null/bdev_null_rpc.o 00:02:52.749 LIB libspdk_bdev_error.a 00:02:52.749 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.749 CC module/bdev/nvme/vbdev_opal.o 00:02:52.749 LIB libspdk_blobfs_bdev.a 00:02:52.749 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.749 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.749 LIB libspdk_bdev_null.a 00:02:52.749 LIB libspdk_bdev_delay.a 00:02:52.749 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.006 LIB libspdk_bdev_lvol.a 00:02:53.006 CC module/bdev/passthru/vbdev_passthru.o 00:02:53.006 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.006 LIB libspdk_bdev_malloc.a 00:02:53.006 CC module/bdev/split/vbdev_split.o 00:02:53.006 CC module/bdev/raid/bdev_raid.o 00:02:53.006 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.264 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.264 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.264 CC module/bdev/aio/bdev_aio.o 00:02:53.264 CC module/bdev/ftl/bdev_ftl.o 00:02:53.264 LIB libspdk_bdev_passthru.a 00:02:53.264 CC module/bdev/iscsi/bdev_iscsi.o 00:02:53.264 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.264 LIB libspdk_bdev_split.a 00:02:53.264 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.264 CC module/bdev/raid/bdev_raid_rpc.o 00:02:53.531 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.531 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.531 CC module/bdev/raid/raid0.o 00:02:53.531 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.531 LIB libspdk_bdev_zone_block.a 00:02:53.531 CC module/bdev/raid/raid1.o 00:02:53.531 LIB libspdk_bdev_ftl.a 00:02:53.531 LIB libspdk_bdev_aio.a 00:02:53.789 CC module/bdev/raid/concat.o 00:02:53.789 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.789 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.789 LIB libspdk_bdev_iscsi.a 00:02:53.789 LIB libspdk_bdev_nvme.a 00:02:54.047 LIB libspdk_bdev_raid.a 00:02:54.047 LIB libspdk_bdev_virtio.a 00:02:54.612 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:54.612 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:54.612 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:54.612 CC module/event/subsystems/iobuf/iobuf.o 00:02:54.612 CC module/event/subsystems/vmd/vmd.o 00:02:54.612 CC module/event/subsystems/sock/sock.o 00:02:54.612 CC module/event/subsystems/keyring/keyring.o 00:02:54.612 CC module/event/subsystems/scheduler/scheduler.o 00:02:54.612 LIB libspdk_event_keyring.a 00:02:54.612 LIB libspdk_event_vhost_blk.a 00:02:54.612 LIB libspdk_event_sock.a 00:02:54.612 LIB libspdk_event_vmd.a 00:02:54.612 LIB libspdk_event_scheduler.a 00:02:54.612 LIB libspdk_event_iobuf.a 00:02:54.870 CC module/event/subsystems/accel/accel.o 00:02:55.127 LIB libspdk_event_accel.a 00:02:55.385 CC module/event/subsystems/bdev/bdev.o 00:02:55.642 LIB libspdk_event_bdev.a 00:02:55.900 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:55.900 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:55.900 CC module/event/subsystems/scsi/scsi.o 00:02:55.900 CC module/event/subsystems/nbd/nbd.o 00:02:55.900 LIB libspdk_event_nbd.a 00:02:55.900 LIB libspdk_event_scsi.a 00:02:56.159 LIB libspdk_event_nvmf.a 00:02:56.418 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:56.418 CC module/event/subsystems/iscsi/iscsi.o 00:02:56.418 LIB libspdk_event_vhost_scsi.a 00:02:56.418 LIB libspdk_event_iscsi.a 00:02:56.676 CC app/trace_record/trace_record.o 00:02:56.676 CC app/spdk_lspci/spdk_lspci.o 00:02:56.676 CC app/spdk_nvme_perf/perf.o 00:02:56.934 CXX app/trace/trace.o 00:02:56.934 CC app/spdk_nvme_identify/identify.o 00:02:56.934 CC app/nvmf_tgt/nvmf_main.o 00:02:56.934 CC app/iscsi_tgt/iscsi_tgt.o 00:02:56.934 CC app/spdk_tgt/spdk_tgt.o 00:02:56.934 CC examples/util/zipf/zipf.o 00:02:56.934 CC test/thread/poller_perf/poller_perf.o 00:02:56.934 LINK spdk_lspci 00:02:57.192 
LINK spdk_trace_record 00:02:57.192 LINK zipf 00:02:57.192 LINK nvmf_tgt 00:02:57.192 LINK poller_perf 00:02:57.192 LINK iscsi_tgt 00:02:57.192 LINK spdk_tgt 00:02:57.192 LINK spdk_trace 00:02:57.450 LINK spdk_nvme_perf 00:02:57.450 LINK spdk_nvme_identify 00:02:57.708 CC test/thread/lock/spdk_lock.o 00:02:57.708 CC examples/ioat/perf/perf.o 00:02:57.708 CC examples/vmd/lsvmd/lsvmd.o 00:02:57.966 LINK lsvmd 00:02:57.966 LINK ioat_perf 00:02:57.966 CC examples/vmd/led/led.o 00:02:58.224 LINK led 00:02:58.482 CC test/dma/test_dma/test_dma.o 00:02:58.482 LINK spdk_lock 00:02:58.482 CC examples/ioat/verify/verify.o 00:02:58.740 CC test/app/bdev_svc/bdev_svc.o 00:02:58.740 LINK test_dma 00:02:58.740 CC app/spdk_nvme_discover/discovery_aer.o 00:02:58.740 LINK verify 00:02:58.740 LINK bdev_svc 00:02:58.998 LINK spdk_nvme_discover 00:02:58.998 CC examples/idxd/perf/perf.o 00:02:59.256 TEST_HEADER include/spdk/accel.h 00:02:59.256 TEST_HEADER include/spdk/accel_module.h 00:02:59.256 TEST_HEADER include/spdk/assert.h 00:02:59.256 TEST_HEADER include/spdk/barrier.h 00:02:59.256 TEST_HEADER include/spdk/base64.h 00:02:59.256 TEST_HEADER include/spdk/bdev.h 00:02:59.256 TEST_HEADER include/spdk/bdev_module.h 00:02:59.256 TEST_HEADER include/spdk/bdev_zone.h 00:02:59.256 TEST_HEADER include/spdk/bit_array.h 00:02:59.256 TEST_HEADER include/spdk/bit_pool.h 00:02:59.256 TEST_HEADER include/spdk/blob.h 00:02:59.256 TEST_HEADER include/spdk/blob_bdev.h 00:02:59.256 TEST_HEADER include/spdk/blobfs.h 00:02:59.256 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:59.256 TEST_HEADER include/spdk/conf.h 00:02:59.256 TEST_HEADER include/spdk/config.h 00:02:59.256 TEST_HEADER include/spdk/cpuset.h 00:02:59.256 TEST_HEADER include/spdk/crc16.h 00:02:59.256 TEST_HEADER include/spdk/crc32.h 00:02:59.256 TEST_HEADER include/spdk/crc64.h 00:02:59.256 TEST_HEADER include/spdk/dif.h 00:02:59.256 TEST_HEADER include/spdk/dma.h 00:02:59.256 TEST_HEADER include/spdk/endian.h 00:02:59.256 TEST_HEADER include/spdk/env.h 00:02:59.256 TEST_HEADER include/spdk/env_dpdk.h 00:02:59.256 TEST_HEADER include/spdk/event.h 00:02:59.256 TEST_HEADER include/spdk/fd.h 00:02:59.256 TEST_HEADER include/spdk/fd_group.h 00:02:59.256 TEST_HEADER include/spdk/file.h 00:02:59.256 TEST_HEADER include/spdk/ftl.h 00:02:59.256 TEST_HEADER include/spdk/gpt_spec.h 00:02:59.256 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:59.256 TEST_HEADER include/spdk/hexlify.h 00:02:59.256 TEST_HEADER include/spdk/histogram_data.h 00:02:59.256 TEST_HEADER include/spdk/idxd.h 00:02:59.256 TEST_HEADER include/spdk/idxd_spec.h 00:02:59.256 TEST_HEADER include/spdk/init.h 00:02:59.256 TEST_HEADER include/spdk/ioat.h 00:02:59.256 TEST_HEADER include/spdk/ioat_spec.h 00:02:59.256 TEST_HEADER include/spdk/iscsi_spec.h 00:02:59.256 TEST_HEADER include/spdk/json.h 00:02:59.256 TEST_HEADER include/spdk/jsonrpc.h 00:02:59.256 TEST_HEADER include/spdk/keyring.h 00:02:59.256 TEST_HEADER include/spdk/keyring_module.h 00:02:59.256 TEST_HEADER include/spdk/likely.h 00:02:59.256 TEST_HEADER include/spdk/log.h 00:02:59.256 TEST_HEADER include/spdk/lvol.h 00:02:59.256 TEST_HEADER include/spdk/memory.h 00:02:59.256 TEST_HEADER include/spdk/mmio.h 00:02:59.256 TEST_HEADER include/spdk/nbd.h 00:02:59.256 TEST_HEADER include/spdk/notify.h 00:02:59.256 TEST_HEADER include/spdk/nvme.h 00:02:59.256 TEST_HEADER include/spdk/nvme_intel.h 00:02:59.256 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:59.256 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:59.256 TEST_HEADER include/spdk/nvme_spec.h 
00:02:59.256 TEST_HEADER include/spdk/nvme_zns.h 00:02:59.256 TEST_HEADER include/spdk/nvmf.h 00:02:59.256 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:59.256 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:59.256 TEST_HEADER include/spdk/nvmf_spec.h 00:02:59.256 TEST_HEADER include/spdk/nvmf_transport.h 00:02:59.257 TEST_HEADER include/spdk/opal.h 00:02:59.514 TEST_HEADER include/spdk/opal_spec.h 00:02:59.514 TEST_HEADER include/spdk/pci_ids.h 00:02:59.514 TEST_HEADER include/spdk/pipe.h 00:02:59.514 TEST_HEADER include/spdk/queue.h 00:02:59.514 TEST_HEADER include/spdk/reduce.h 00:02:59.514 LINK idxd_perf 00:02:59.514 TEST_HEADER include/spdk/rpc.h 00:02:59.514 TEST_HEADER include/spdk/scheduler.h 00:02:59.514 TEST_HEADER include/spdk/scsi.h 00:02:59.514 TEST_HEADER include/spdk/scsi_spec.h 00:02:59.514 TEST_HEADER include/spdk/sock.h 00:02:59.514 TEST_HEADER include/spdk/stdinc.h 00:02:59.514 TEST_HEADER include/spdk/string.h 00:02:59.514 TEST_HEADER include/spdk/thread.h 00:02:59.514 TEST_HEADER include/spdk/trace.h 00:02:59.514 TEST_HEADER include/spdk/trace_parser.h 00:02:59.514 TEST_HEADER include/spdk/tree.h 00:02:59.514 TEST_HEADER include/spdk/ublk.h 00:02:59.514 TEST_HEADER include/spdk/util.h 00:02:59.514 TEST_HEADER include/spdk/uuid.h 00:02:59.514 TEST_HEADER include/spdk/version.h 00:02:59.514 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:59.514 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:59.514 TEST_HEADER include/spdk/vhost.h 00:02:59.514 TEST_HEADER include/spdk/vmd.h 00:02:59.514 TEST_HEADER include/spdk/xor.h 00:02:59.514 TEST_HEADER include/spdk/zipf.h 00:02:59.514 CXX test/cpp_headers/accel.o 00:02:59.514 LINK interrupt_tgt 00:02:59.772 CC examples/thread/thread/thread_ex.o 00:02:59.772 CXX test/cpp_headers/accel_module.o 00:03:00.029 CXX test/cpp_headers/assert.o 00:03:00.030 LINK thread 00:03:00.030 CXX test/cpp_headers/barrier.o 00:03:00.030 CXX test/cpp_headers/base64.o 00:03:00.030 CXX test/cpp_headers/bdev.o 00:03:00.030 CXX test/cpp_headers/bdev_module.o 00:03:00.030 CXX test/cpp_headers/bdev_zone.o 00:03:00.030 CXX test/cpp_headers/bit_array.o 00:03:00.030 CXX test/cpp_headers/bit_pool.o 00:03:00.287 CXX test/cpp_headers/blob.o 00:03:00.288 CC app/spdk_top/spdk_top.o 00:03:00.288 CC examples/sock/hello_world/hello_sock.o 00:03:00.288 CXX test/cpp_headers/blob_bdev.o 00:03:00.546 CC app/vhost/vhost.o 00:03:00.546 CC app/spdk_dd/spdk_dd.o 00:03:00.546 LINK hello_sock 00:03:00.546 CC app/fio/nvme/fio_plugin.o 00:03:00.546 CXX test/cpp_headers/blobfs.o 00:03:00.546 LINK vhost 00:03:00.804 CXX test/cpp_headers/blobfs_bdev.o 00:03:00.804 LINK spdk_dd 00:03:00.804 LINK spdk_top 00:03:00.804 CXX test/cpp_headers/conf.o 00:03:01.062 LINK spdk_nvme 00:03:01.062 CXX test/cpp_headers/config.o 00:03:01.062 CXX test/cpp_headers/cpuset.o 00:03:01.062 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:01.393 CXX test/cpp_headers/crc16.o 00:03:01.393 CXX test/cpp_headers/crc32.o 00:03:01.393 CXX test/cpp_headers/crc64.o 00:03:01.393 CC examples/nvme/hello_world/hello_world.o 00:03:01.393 LINK nvme_fuzz 00:03:01.393 CC examples/nvme/reconnect/reconnect.o 00:03:01.393 CXX test/cpp_headers/dif.o 00:03:01.652 CC test/app/histogram_perf/histogram_perf.o 00:03:01.652 CXX test/cpp_headers/dma.o 00:03:01.652 LINK hello_world 00:03:01.652 LINK histogram_perf 00:03:01.910 CC test/app/jsoncat/jsoncat.o 00:03:01.910 LINK reconnect 00:03:01.910 CXX test/cpp_headers/endian.o 00:03:01.910 LINK jsoncat 00:03:02.169 CXX test/cpp_headers/env.o 00:03:02.169 CC app/fio/bdev/fio_plugin.o 00:03:02.169 
CXX test/cpp_headers/env_dpdk.o 00:03:02.427 CXX test/cpp_headers/event.o 00:03:02.685 LINK spdk_bdev 00:03:02.685 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:02.685 CXX test/cpp_headers/fd.o 00:03:02.685 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:02.685 CC examples/accel/perf/accel_perf.o 00:03:02.685 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:02.685 CC examples/blob/hello_world/hello_blob.o 00:03:02.943 CXX test/cpp_headers/fd_group.o 00:03:02.943 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:02.943 LINK accel_perf 00:03:02.943 CXX test/cpp_headers/file.o 00:03:02.943 LINK hello_blob 00:03:03.201 LINK vhost_fuzz 00:03:03.201 CXX test/cpp_headers/ftl.o 00:03:03.201 CC test/env/mem_callbacks/mem_callbacks.o 00:03:03.460 LINK nvme_manage 00:03:03.460 CXX test/cpp_headers/gpt_spec.o 00:03:03.460 CXX test/cpp_headers/hexlify.o 00:03:03.460 CC test/event/event_perf/event_perf.o 00:03:03.718 CC test/event/reactor/reactor.o 00:03:03.718 LINK iscsi_fuzz 00:03:03.718 LINK mem_callbacks 00:03:03.718 LINK event_perf 00:03:03.718 LINK reactor 00:03:03.718 CXX test/cpp_headers/histogram_data.o 00:03:03.976 CC test/event/reactor_perf/reactor_perf.o 00:03:03.976 CXX test/cpp_headers/idxd.o 00:03:03.976 LINK reactor_perf 00:03:04.234 CXX test/cpp_headers/idxd_spec.o 00:03:04.234 CC test/app/stub/stub.o 00:03:04.234 CC test/env/vtophys/vtophys.o 00:03:04.234 LINK stub 00:03:04.234 CXX test/cpp_headers/init.o 00:03:04.492 LINK vtophys 00:03:04.492 CC examples/nvme/arbitration/arbitration.o 00:03:04.492 CXX test/cpp_headers/ioat.o 00:03:04.492 CC examples/nvme/hotplug/hotplug.o 00:03:04.749 CC examples/blob/cli/blobcli.o 00:03:04.749 CXX test/cpp_headers/ioat_spec.o 00:03:04.749 LINK arbitration 00:03:04.749 LINK hotplug 00:03:04.749 CC test/nvme/aer/aer.o 00:03:04.749 CC test/event/app_repeat/app_repeat.o 00:03:05.006 CXX test/cpp_headers/iscsi_spec.o 00:03:05.006 LINK app_repeat 00:03:05.006 CXX test/cpp_headers/json.o 00:03:05.006 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:05.262 LINK blobcli 00:03:05.262 LINK aer 00:03:05.262 CXX test/cpp_headers/jsonrpc.o 00:03:05.262 LINK env_dpdk_post_init 00:03:05.519 CXX test/cpp_headers/keyring.o 00:03:05.520 CC test/nvme/reset/reset.o 00:03:05.520 CXX test/cpp_headers/keyring_module.o 00:03:05.777 CC test/event/scheduler/scheduler.o 00:03:05.777 CC test/rpc_client/rpc_client_test.o 00:03:05.777 LINK reset 00:03:05.777 CXX test/cpp_headers/likely.o 00:03:06.035 LINK rpc_client_test 00:03:06.035 LINK scheduler 00:03:06.035 CXX test/cpp_headers/log.o 00:03:06.293 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.293 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:06.293 CXX test/cpp_headers/lvol.o 00:03:06.293 CC test/env/pci/pci_ut.o 00:03:06.293 CC test/env/memory/memory_ut.o 00:03:06.551 LINK cmb_copy 00:03:06.551 CC test/unit/lib/log/log.c/log_ut.o 00:03:06.551 LINK histogram_ut 00:03:06.551 CXX test/cpp_headers/memory.o 00:03:06.551 CXX test/cpp_headers/mmio.o 00:03:06.810 LINK log_ut 00:03:06.810 LINK pci_ut 00:03:06.810 CXX test/cpp_headers/nbd.o 00:03:06.810 CXX test/cpp_headers/notify.o 00:03:06.810 CC test/nvme/sgl/sgl.o 00:03:07.067 CC test/accel/dif/dif.o 00:03:07.067 CXX test/cpp_headers/nvme.o 00:03:07.067 CC test/blobfs/mkfs/mkfs.o 00:03:07.067 LINK sgl 00:03:07.325 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:07.325 LINK mkfs 00:03:07.325 LINK dif 00:03:07.325 CXX test/cpp_headers/nvme_intel.o 00:03:07.325 LINK memory_ut 00:03:07.629 CC examples/nvme/abort/abort.o 00:03:07.629 CC 
test/lvol/esnap/esnap.o 00:03:07.886 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:07.886 CXX test/cpp_headers/nvme_ocssd.o 00:03:07.886 LINK common_ut 00:03:07.886 LINK abort 00:03:07.886 LINK pmr_persistence 00:03:08.144 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:08.144 CC examples/bdev/hello_world/hello_bdev.o 00:03:08.402 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:08.402 CXX test/cpp_headers/nvme_spec.o 00:03:08.402 LINK hello_bdev 00:03:08.658 LINK base64_ut 00:03:08.658 CXX test/cpp_headers/nvme_zns.o 00:03:08.916 CC test/nvme/e2edp/nvme_dp.o 00:03:08.916 CXX test/cpp_headers/nvmf.o 00:03:08.916 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:08.916 CXX test/cpp_headers/nvmf_cmd.o 00:03:09.174 LINK nvme_dp 00:03:09.174 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:09.174 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:09.174 CC examples/bdev/bdevperf/bdevperf.o 00:03:09.432 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:09.432 CXX test/cpp_headers/nvmf_spec.o 00:03:09.432 LINK bit_array_ut 00:03:09.690 CXX test/cpp_headers/nvmf_transport.o 00:03:09.690 LINK ioat_ut 00:03:09.690 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:09.947 LINK bdevperf 00:03:09.947 CXX test/cpp_headers/opal.o 00:03:09.947 LINK dma_ut 00:03:09.947 LINK cpuset_ut 00:03:09.947 CXX test/cpp_headers/opal_spec.o 00:03:09.947 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:09.947 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:10.204 LINK crc32_ieee_ut 00:03:10.204 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:10.204 CC test/nvme/overhead/overhead.o 00:03:10.204 CXX test/cpp_headers/pci_ids.o 00:03:10.204 LINK crc16_ut 00:03:10.204 CC test/nvme/err_injection/err_injection.o 00:03:10.204 CC test/bdev/bdevio/bdevio.o 00:03:10.204 LINK crc32c_ut 00:03:10.462 CXX test/cpp_headers/pipe.o 00:03:10.462 LINK err_injection 00:03:10.462 LINK overhead 00:03:10.462 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:10.462 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:10.462 CXX test/cpp_headers/queue.o 00:03:10.462 LINK crc64_ut 00:03:10.462 CXX test/cpp_headers/reduce.o 00:03:10.462 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:10.719 LINK bdevio 00:03:10.719 CXX test/cpp_headers/rpc.o 00:03:10.719 LINK iov_ut 00:03:10.976 CXX test/cpp_headers/scheduler.o 00:03:10.976 CC test/unit/lib/util/math.c/math_ut.o 00:03:10.976 CXX test/cpp_headers/scsi.o 00:03:10.976 CC test/nvme/startup/startup.o 00:03:11.234 LINK math_ut 00:03:11.234 CXX test/cpp_headers/scsi_spec.o 00:03:11.234 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:11.234 LINK esnap 00:03:11.234 LINK startup 00:03:11.492 CC test/nvme/reserve/reserve.o 00:03:11.492 LINK dif_ut 00:03:11.492 CXX test/cpp_headers/sock.o 00:03:11.492 CC test/nvme/simple_copy/simple_copy.o 00:03:11.492 LINK reserve 00:03:11.492 CC test/nvme/connect_stress/connect_stress.o 00:03:11.749 CXX test/cpp_headers/stdinc.o 00:03:11.749 LINK pipe_ut 00:03:11.749 LINK simple_copy 00:03:11.749 CC test/nvme/boot_partition/boot_partition.o 00:03:11.749 CC test/nvme/compliance/nvme_compliance.o 00:03:11.749 CXX test/cpp_headers/string.o 00:03:11.749 LINK connect_stress 00:03:12.007 LINK boot_partition 00:03:12.007 CXX test/cpp_headers/thread.o 00:03:12.007 CC test/unit/lib/util/string.c/string_ut.o 00:03:12.265 CXX test/cpp_headers/trace.o 00:03:12.265 LINK nvme_compliance 00:03:12.265 LINK string_ut 00:03:12.265 CXX test/cpp_headers/trace_parser.o 00:03:12.265 CC test/nvme/fused_ordering/fused_ordering.o 00:03:12.523 CXX test/cpp_headers/tree.o 00:03:12.523 LINK fused_ordering 00:03:12.523 CXX 
test/cpp_headers/ublk.o 00:03:12.523 CXX test/cpp_headers/util.o 00:03:12.781 CC examples/nvmf/nvmf/nvmf.o 00:03:12.781 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:12.781 CXX test/cpp_headers/uuid.o 00:03:13.038 CXX test/cpp_headers/version.o 00:03:13.038 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:13.038 CC test/nvme/fdp/fdp.o 00:03:13.038 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.038 CXX test/cpp_headers/vfio_user_spec.o 00:03:13.038 LINK nvmf 00:03:13.038 CXX test/cpp_headers/vhost.o 00:03:13.038 LINK doorbell_aers 00:03:13.038 CC test/nvme/cuse/cuse.o 00:03:13.296 LINK xor_ut 00:03:13.296 CXX test/cpp_headers/vmd.o 00:03:13.296 LINK fdp 00:03:13.296 CXX test/cpp_headers/xor.o 00:03:13.296 CXX test/cpp_headers/zipf.o 00:03:13.864 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:13.864 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:13.864 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:13.864 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:13.864 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:13.864 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:14.123 LINK cuse 00:03:14.123 LINK pci_event_ut 00:03:14.381 LINK idxd_user_ut 00:03:14.381 LINK json_write_ut 00:03:14.381 LINK json_util_ut 00:03:14.381 LINK idxd_ut 00:03:14.949 LINK json_parse_ut 00:03:15.515 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:15.773 LINK jsonrpc_server_ut 00:03:16.340 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:16.905 LINK rpc_ut 00:03:17.469 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:17.469 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:17.469 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:17.469 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:17.469 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:17.469 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:18.034 LINK keyring_ut 00:03:18.034 LINK notify_ut 00:03:18.292 LINK iobuf_ut 00:03:18.292 LINK posix_ut 00:03:18.858 LINK sock_ut 00:03:18.858 LINK thread_ut 00:03:19.424 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:19.424 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:19.424 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:19.424 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:19.424 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:19.424 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:19.424 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:19.424 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:19.424 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:19.424 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:19.991 LINK subsystem_ut 00:03:19.991 LINK blob_bdev_ut 00:03:19.991 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:19.991 LINK nvme_ns_ut 00:03:20.250 LINK nvme_ctrlr_cmd_ut 00:03:20.250 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:20.508 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:20.508 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:20.508 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:20.508 LINK nvme_ut 00:03:20.508 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:20.508 LINK nvme_ns_ocssd_cmd_ut 00:03:20.767 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:20.767 LINK rpc_ut 00:03:20.767 LINK nvme_ns_cmd_ut 00:03:21.025 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:21.025 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:21.284 LINK nvme_quirks_ut 00:03:21.284 LINK accel_ut 00:03:21.284 CC 
test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:21.284 LINK nvme_poll_group_ut 00:03:21.542 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:21.542 LINK nvme_qpair_ut 00:03:21.542 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:21.542 LINK nvme_ctrlr_ut 00:03:21.800 CC test/unit/lib/event/app.c/app_ut.o 00:03:21.800 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:22.058 LINK nvme_transport_ut 00:03:22.058 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:22.058 LINK nvme_io_msg_ut 00:03:22.058 LINK nvme_pcie_ut 00:03:22.315 LINK app_ut 00:03:22.315 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:22.315 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:22.574 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:22.574 LINK nvme_fabric_ut 00:03:22.574 LINK nvme_opal_ut 00:03:22.574 LINK nvme_pcie_common_ut 00:03:22.574 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:22.574 LINK scsi_nvme_ut 00:03:22.832 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:22.832 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:22.832 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:23.091 LINK nvme_tcp_ut 00:03:23.091 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:23.348 LINK gpt_ut 00:03:23.348 LINK reactor_ut 00:03:23.605 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:23.605 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:23.605 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:23.863 LINK vbdev_lvol_ut 00:03:24.121 LINK bdev_raid_sb_ut 00:03:24.121 LINK nvme_cuse_ut 00:03:24.402 LINK concat_ut 00:03:24.402 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:24.402 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:24.402 LINK nvme_rdma_ut 00:03:24.402 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:24.673 LINK bdev_zone_ut 00:03:24.673 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:24.673 LINK part_ut 00:03:24.940 LINK bdev_raid_ut 00:03:24.940 LINK raid1_ut 00:03:24.940 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:24.940 LINK blob_ut 00:03:25.199 LINK raid0_ut 00:03:25.199 LINK vbdev_zone_block_ut 00:03:25.457 LINK bdev_ut 00:03:25.457 LINK bdev_ut 00:03:25.457 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:25.457 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:25.457 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:25.457 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:25.457 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:25.715 LINK blobfs_bdev_ut 00:03:25.715 LINK tree_ut 00:03:26.649 LINK blobfs_sync_ut 00:03:26.649 LINK blobfs_async_ut 00:03:26.908 LINK lvol_ut 00:03:28.305 LINK bdev_nvme_ut 00:03:28.872 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:03:28.872 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:28.872 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:28.872 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:28.872 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:28.872 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:28.872 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:28.872 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:28.872 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:29.131 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:29.131 LINK ftl_bitmap_ut 00:03:29.389 LINK ftl_mempool_ut 00:03:29.389 LINK scsi_ut 00:03:29.389 LINK ftl_l2p_ut 00:03:29.389 LINK dev_ut 00:03:29.647 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:29.647 LINK ftl_io_ut 00:03:29.647 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 
00:03:29.647 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:29.647 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:29.647 LINK lun_ut 00:03:29.647 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:29.905 LINK ftl_p2l_ut 00:03:30.163 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:30.163 LINK ftl_band_ut 00:03:30.163 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:30.163 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:30.163 LINK ftl_mngt_ut 00:03:30.421 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:30.421 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:30.679 LINK scsi_pr_ut 00:03:30.679 LINK scsi_bdev_ut 00:03:30.937 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:30.937 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:31.196 LINK ftl_sb_ut 00:03:31.196 LINK ftl_layout_upgrade_ut 00:03:31.196 LINK ctrlr_discovery_ut 00:03:31.196 LINK ctrlr_bdev_ut 00:03:31.196 LINK subsystem_ut 00:03:31.453 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:31.453 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:31.453 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:31.453 LINK nvmf_ut 00:03:31.453 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:31.712 LINK tcp_ut 00:03:31.712 LINK ctrlr_ut 00:03:31.712 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:31.970 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:31.970 LINK init_grp_ut 00:03:31.970 LINK auth_ut 00:03:31.970 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:31.970 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:32.229 LINK param_ut 00:03:32.487 LINK conn_ut 00:03:32.744 LINK portal_grp_ut 00:03:32.744 LINK rdma_ut 00:03:33.002 LINK tgt_node_ut 00:03:33.002 LINK transport_ut 00:03:33.260 LINK vhost_ut 00:03:33.260 LINK iscsi_ut 00:03:33.519 00:03:33.519 real 2m7.749s 00:03:33.519 user 9m40.517s 00:03:33.519 sys 3m19.688s 00:03:33.519 13:56:19 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:33.519 13:56:19 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:33.519 ************************************ 00:03:33.519 END TEST unittest_build 00:03:33.519 ************************************ 00:03:33.519 13:56:19 -- common/autotest_common.sh@1142 -- $ return 0 00:03:33.519 13:56:19 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.519 13:56:19 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.519 13:56:19 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.519 13:56:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.519 13:56:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.519 13:56:19 -- pm/common@44 -- $ pid=7432 00:03:33.519 13:56:19 -- pm/common@50 -- $ kill -TERM 7432 00:03:33.519 13:56:19 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.519 13:56:19 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.519 13:56:19 -- pm/common@44 -- $ pid=7434 00:03:33.519 13:56:19 -- pm/common@50 -- $ kill -TERM 7434 00:03:33.519 sudo: /etc/sudoers.d/99-spdk-rlimits:1:23: unknown defaults entry "rlimit_core" 00:03:33.778 13:56:19 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.778 13:56:19 -- nvmf/common.sh@7 -- # uname -s 00:03:33.778 13:56:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.778 13:56:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.778 13:56:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:03:33.778 13:56:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.778 13:56:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.778 13:56:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.778 13:56:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.778 13:56:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.778 13:56:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.778 13:56:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.778 13:56:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6eb37903-5e6e-4bf2-b995-7433baab6b1f 00:03:33.778 13:56:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=6eb37903-5e6e-4bf2-b995-7433baab6b1f 00:03:33.778 13:56:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.778 13:56:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.778 13:56:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:33.778 13:56:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.778 13:56:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.778 13:56:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.778 13:56:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.778 13:56:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.778 13:56:19 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:33.778 13:56:19 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:33.778 13:56:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:33.778 13:56:19 -- paths/export.sh@5 -- # export PATH 00:03:33.778 13:56:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:03:33.778 13:56:19 -- nvmf/common.sh@47 -- # : 0 00:03:33.778 13:56:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:33.778 13:56:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:33.778 13:56:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.778 13:56:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.778 13:56:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.778 13:56:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:33.778 13:56:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:33.778 13:56:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:33.778 13:56:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.778 13:56:19 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.778 13:56:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.778 13:56:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.778 13:56:19 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.778 13:56:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.778 13:56:19 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.778 13:56:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.778 13:56:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.778 13:56:19 -- spdk/autotest.sh@46 -- # udevadm=/sbin/udevadm 00:03:33.778 13:56:19 -- spdk/autotest.sh@48 -- # udevadm_pid=168613 00:03:33.778 13:56:19 -- spdk/autotest.sh@47 -- # /sbin/udevadm monitor --property 00:03:33.778 13:56:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.778 13:56:19 -- pm/common@17 -- # local monitor 00:03:33.778 13:56:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.778 13:56:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.778 13:56:19 -- pm/common@25 -- # sleep 1 00:03:33.778 13:56:19 -- pm/common@21 -- # date +%s 00:03:33.778 13:56:19 -- pm/common@21 -- # date +%s 00:03:33.778 13:56:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721051779 00:03:33.778 13:56:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721051779 00:03:33.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721051779_collect-vmstat.pm.log 00:03:33.778 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721051779_collect-cpu-load.pm.log 00:03:34.713 13:56:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.713 13:56:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.713 13:56:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:34.713 13:56:20 -- common/autotest_common.sh@10 -- # set +x 00:03:34.713 13:56:20 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.713 13:56:20 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:34.713 13:56:20 -- common/autotest_common.sh@10 -- # set +x 00:03:34.713 13:56:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.713 13:56:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.713 13:56:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.713 13:56:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.713 13:56:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.713 13:56:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.713 13:56:20 -- common/autotest_common.sh@1455 -- # uname 00:03:34.713 13:56:20 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:34.713 13:56:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.713 13:56:20 -- common/autotest_common.sh@1475 -- # uname 00:03:34.713 13:56:20 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:34.713 13:56:20 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:34.713 13:56:20 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:34.713 13:56:20 -- spdk/autotest.sh@72 -- # hash lcov 00:03:34.713 13:56:20 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:34.713 13:56:20 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:34.713 --rc lcov_branch_coverage=1 00:03:34.713 
--rc lcov_function_coverage=1 00:03:34.713 --rc genhtml_branch_coverage=1 00:03:34.713 --rc genhtml_function_coverage=1 00:03:34.713 --rc genhtml_legend=1 00:03:34.713 --rc geninfo_all_blocks=1 00:03:34.713 ' 00:03:34.713 13:56:20 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:34.713 --rc lcov_branch_coverage=1 00:03:34.713 --rc lcov_function_coverage=1 00:03:34.713 --rc genhtml_branch_coverage=1 00:03:34.713 --rc genhtml_function_coverage=1 00:03:34.713 --rc genhtml_legend=1 00:03:34.713 --rc geninfo_all_blocks=1 00:03:34.713 ' 00:03:34.713 13:56:20 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:34.713 --rc lcov_branch_coverage=1 00:03:34.713 --rc lcov_function_coverage=1 00:03:34.713 --rc genhtml_branch_coverage=1 00:03:34.713 --rc genhtml_function_coverage=1 00:03:34.713 --rc genhtml_legend=1 00:03:34.713 --rc geninfo_all_blocks=1 00:03:34.713 --no-external' 00:03:34.713 13:56:20 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:34.713 --rc lcov_branch_coverage=1 00:03:34.713 --rc lcov_function_coverage=1 00:03:34.713 --rc genhtml_branch_coverage=1 00:03:34.713 --rc genhtml_function_coverage=1 00:03:34.713 --rc genhtml_legend=1 00:03:34.713 --rc geninfo_all_blocks=1 00:03:34.713 --no-external' 00:03:34.713 13:56:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:34.972 lcov: LCOV version 1.15 00:03:34.972 13:56:20 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:49.863 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.863 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:57.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:57.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:57.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:57.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:57.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:57.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:57.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:57.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:57.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:57.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:57.972 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:57.972 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 
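Note on the coverage capture above: autotest.sh first runs lcov with -c -i -t Baseline, which records a zero-coverage entry for every instrumented object before any test executes; the geninfo "no functions found" warnings that follow are expected for objects whose translation units contain no instrumented functions (the cpp_headers targets only compile each public header) and are not failures. A hedged sketch of how such a baseline is typically merged with post-test counters; the file names below are illustrative and not taken from this run:

    # capture an empty baseline so never-executed files still appear at 0%
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         -q -c -i -t Baseline -d ./spdk -o cov_base.info
    # ... run the test suites so the .gcda counters get written ...
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         -q -c -t Tests -d ./spdk -o cov_test.info
    # merge the two tracefiles; untested files keep their 0% entry instead of disappearing
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info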
00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:57.973 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:57.973 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:57.973 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:57.974 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:57.974 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 
00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:58.232 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:58.232 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:30.379 13:57:14 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:30.379 13:57:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:30.379 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:04:30.379 13:57:14 -- spdk/autotest.sh@91 -- # rm -f 00:04:30.379 13:57:14 -- spdk/autotest.sh@94 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:30.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:30.379 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:30.379 13:57:15 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:30.379 13:57:15 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:30.379 13:57:15 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:30.379 13:57:15 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:30.379 13:57:15 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.379 13:57:15 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:30.379 13:57:15 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:30.379 13:57:15 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.379 13:57:15 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.379 13:57:15 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:30.379 13:57:15 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:30.379 13:57:15 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:30.379 13:57:15 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:30.379 13:57:15 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:30.379 13:57:15 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:30.379 No valid GPT data, bailing 00:04:30.379 13:57:15 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:30.379 13:57:15 -- scripts/common.sh@391 -- # pt= 00:04:30.379 13:57:15 -- scripts/common.sh@392 -- # return 1 00:04:30.379 13:57:15 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:30.379 1+0 records in 00:04:30.380 1+0 records out 00:04:30.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00364635 s, 288 MB/s 00:04:30.380 13:57:15 -- spdk/autotest.sh@118 -- # sync 00:04:30.380 13:57:15 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:30.380 13:57:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:30.380 13:57:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:30.640 13:57:16 -- spdk/autotest.sh@124 -- # uname -s 00:04:30.641 13:57:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:30.641 13:57:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:30.641 13:57:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.641 13:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.641 13:57:16 -- common/autotest_common.sh@10 -- # set +x 00:04:30.641 ************************************ 00:04:30.641 START TEST setup.sh 00:04:30.641 ************************************ 00:04:30.641 13:57:16 setup.sh -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:30.899 * Looking for test storage... 
00:04:30.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:30.899 13:57:16 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:30.899 13:57:16 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:30.899 13:57:16 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:30.899 13:57:16 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.899 13:57:16 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.899 13:57:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.899 ************************************ 00:04:30.899 START TEST acl 00:04:30.899 ************************************ 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:30.899 * Looking for test storage... 00:04:30.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:30.899 13:57:16 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:30.899 13:57:16 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:30.899 13:57:16 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:30.899 13:57:16 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:30.899 13:57:16 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:30.899 13:57:16 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:30.899 13:57:16 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:30.899 13:57:16 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.899 13:57:16 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.156 13:57:17 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:31.156 13:57:17 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:31.156 13:57:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.156 13:57:17 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:31.156 13:57:17 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.156 13:57:17 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.722 Hugepages 00:04:31.722 node hugesize free / total 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.722 00:04:31.722 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.722 13:57:17 setup.sh.acl 
-- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:31.722 13:57:17 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:31.722 13:57:17 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.722 13:57:17 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.722 13:57:17 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:31.722 ************************************ 00:04:31.722 START TEST denied 00:04:31.722 ************************************ 00:04:31.722 13:57:17 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:31.722 13:57:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:31.722 13:57:17 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:31.722 13:57:17 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:31.722 13:57:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.722 13:57:17 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.980 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:31.980 13:57:17 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:32.543 00:04:32.543 real 0m0.730s 00:04:32.543 user 0m0.360s 00:04:32.543 sys 0m0.409s 00:04:32.543 13:57:18 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.543 13:57:18 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:32.543 ************************************ 00:04:32.543 
END TEST denied 00:04:32.543 ************************************ 00:04:32.543 13:57:18 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:32.543 13:57:18 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:32.543 13:57:18 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.543 13:57:18 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.543 13:57:18 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:32.543 ************************************ 00:04:32.543 START TEST allowed 00:04:32.543 ************************************ 00:04:32.543 13:57:18 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:32.543 13:57:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:32.543 13:57:18 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:32.543 13:57:18 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:32.543 13:57:18 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.543 13:57:18 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.107 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.107 13:57:18 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:33.107 13:57:18 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:33.107 13:57:18 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:33.107 13:57:18 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.107 13:57:18 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:33.365 00:04:33.365 real 0m0.796s 00:04:33.365 user 0m0.273s 00:04:33.365 sys 0m0.481s 00:04:33.365 13:57:19 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.365 13:57:19 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:33.365 ************************************ 00:04:33.365 END TEST allowed 00:04:33.365 ************************************ 00:04:33.365 13:57:19 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:33.365 00:04:33.365 real 0m2.530s 00:04:33.365 user 0m1.074s 00:04:33.365 sys 0m1.476s 00:04:33.365 13:57:19 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.365 13:57:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.365 ************************************ 00:04:33.365 END TEST acl 00:04:33.365 ************************************ 00:04:33.365 13:57:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:33.365 13:57:19 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:33.365 13:57:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.365 13:57:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.365 13:57:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.365 ************************************ 00:04:33.365 START TEST hugepages 00:04:33.365 ************************************ 00:04:33.365 13:57:19 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:33.365 * Looking for test storage... 
00:04:33.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 4736748 kB' 'MemAvailable: 7471024 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007336 kB' 'Inactive: 2054888 kB' 'Active(anon): 988 kB' 'Inactive(anon): 129868 kB' 'Active(file): 1006348 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 112416 kB' 'Mapped: 38960 kB' 'Shmem: 18524 kB' 'KReclaimable: 79484 kB' 'Slab: 143156 kB' 'SReclaimable: 79484 kB' 'SUnreclaim: 63672 kB' 'KernelStack: 4616 kB' 'PageTables: 3632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026296 kB' 'Committed_AS: 366032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22940 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.365 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.366 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
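The xtrace run above comes from setup/common.sh's get_meminfo helper: with IFS set to ': ' it reads /proc/meminfo row by row, skips every key that is not the requested field (Hugepagesize here), and echoes the matching value, which is the "echo 2048" visible just below. A minimal stand-alone sketch of that pattern follows, assuming the same field names; the real helper additionally accepts a node= argument to read a per-NUMA-node meminfo instead:

    # simplified re-implementation of the get_meminfo pattern seen in the trace
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip rows until the requested key matches
            echo "$val"                        # value only, e.g. 2048 (kB) for Hugepagesize
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize   # prints 2048 on this host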
00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:33.659 13:57:19 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:33.659 13:57:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.659 13:57:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.659 13:57:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.659 ************************************ 00:04:33.659 START TEST default_setup 00:04:33.659 ************************************ 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.659 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:33.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:33.918 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6821168 kB' 'MemAvailable: 9555412 kB' 'Buffers: 2208 kB' 'Cached: 2947680 kB' 'SwapCached: 0 kB' 'Active: 1007364 kB' 'Inactive: 2070624 kB' 'Active(anon): 1004 kB' 'Inactive(anon): 145616 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128124 kB' 'Mapped: 38996 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143036 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63616 kB' 'KernelStack: 4532 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22972 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.918 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.919 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
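
[editor's note] Earlier in the trace, get_test_nr_hugepages converted the requested 2097152 kB into a page count for node 0 (2097152 / 2048 = 1024 pages, the nr_hugepages=1024 seen above), and clear_hp wrote 0 into every per-node nr_hugepages counter before the default_setup test began. A rough sketch of that preparation, assuming a single NUMA node and the standard sysfs layout; the variable names are illustrative, only the division and the sysfs writes mirror what the trace shows (the writes need root):

    # Sketch: derive the 2 MB page count for the requested size and clear existing pools.
    default_hugepages_kb=2048                            # Hugepagesize from /proc/meminfo
    size_kb=2097152                                      # requested test size (2 GB)
    nr_hugepages=$(( size_kb / default_hugepages_kb ))   # -> 1024, matching the trace

    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"    # clear_hp: drop any pre-allocated pages before the test
    done
    echo "will request $nr_hugepages pages on node 0"
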
00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 20480 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=20480 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.181 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6820916 kB' 'MemAvailable: 9555160 kB' 'Buffers: 2208 kB' 'Cached: 2947680 kB' 'SwapCached: 0 kB' 'Active: 1007348 kB' 'Inactive: 2070524 kB' 'Active(anon): 988 kB' 'Inactive(anon): 145516 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128252 kB' 'Mapped: 38964 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143020 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63600 kB' 'KernelStack: 4448 kB' 'PageTables: 3092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22940 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 
'DirectMap1G: 9437184 kB' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 
13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 
13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.182 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- 
setup/common.sh@33 -- # echo 0 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.183 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6820916 kB' 'MemAvailable: 9555160 kB' 'Buffers: 2208 kB' 'Cached: 2947680 kB' 'SwapCached: 0 kB' 'Active: 1007356 kB' 'Inactive: 2070780 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145772 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128136 kB' 'Mapped: 38964 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143052 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63632 kB' 'KernelStack: 4448 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22924 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
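
[editor's note] For context on where this long scan is headed: verify_nr_hugepages has already pulled AnonHugePages (anon=20480 kB) and HugePages_Surp (surp=0) out of the meminfo snapshot, and the loop continuing below extracts HugePages_Rsvd the same way. Those values feed a consistency check of the allocated pool against the 1024 pages requested for node 0. A hedged sketch of that kind of check; the exact comparison used by hugepages.sh is not shown in this part of the log, so the formula below is illustrative only:

    # Sketch: sanity-check the hugepage pool after setup, using the same counters
    # the trace reads (illustrative check; the script's exact formula may differ).
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    free=$(awk  '/^HugePages_Free:/  {print $2}' /proc/meminfo)
    rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    expected=1024

    if (( total - surp == expected )); then
        echo "hugepage pool matches the requested $expected pages (free=$free, rsvd=$rsvd)"
    else
        echo "unexpected hugepage pool: total=$total surp=$surp expected=$expected" >&2
    fi
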
00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.184 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:19 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.185 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.186 nr_hugepages=1024 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.186 resv_hugepages=0 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.186 surplus_hugepages=0 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=20480 00:04:34.186 anon_hugepages=20480 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.186 13:57:20 
setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6820916 kB' 'MemAvailable: 9555160 kB' 'Buffers: 2208 kB' 'Cached: 2947680 kB' 'SwapCached: 0 kB' 'Active: 1007356 kB' 'Inactive: 2070580 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145572 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128176 kB' 'Mapped: 38964 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143052 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63632 kB' 'KernelStack: 4448 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22940 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.186 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.187 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.188 13:57:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6820416 kB' 'MemUsed: 5426480 kB' 'SwapCached: 0 kB' 'Active: 1007356 kB' 'Inactive: 2070500 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145492 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925008 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'FilePages: 2949888 kB' 'Mapped: 38964 kB' 'AnonPages: 128092 kB' 'Shmem: 18516 kB' 'KernelStack: 4464 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79420 kB' 'Slab: 143052 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63632 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.188 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.189 node0=1024 expecting 1024 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:34.189 00:04:34.189 real 0m0.690s 00:04:34.189 user 0m0.298s 00:04:34.189 sys 0m0.251s 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.189 13:57:20 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:34.189 ************************************ 00:04:34.189 END TEST default_setup 00:04:34.189 ************************************ 00:04:34.189 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:34.189 13:57:20 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:34.189 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
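
Just above, default_setup finishes its verification: get_meminfo returns HugePages_Rsvd=0 and HugePages_Total=1024, the script checks that the global total balances against nr_hugepages plus surplus and reserved pages, adds each node's HugePages_Surp to its expected count, and confirms "node0=1024 expecting 1024" before the timing summary and the END TEST banner. A compressed sketch of that accounting, with the values hard-coded from this run and reusing the get_meminfo sketch above (the real hugepages.sh drives this through its nodes_test array and helper functions):

nr_hugepages=1024                        # from 'echo nr_hugepages=1024' in the trace
resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
surp=0                                   # surplus_hugepages=0 in the trace
total=$(get_meminfo HugePages_Total)     # 1024 in this run
(( total == nr_hugepages + surp + resv ))   # global accounting must balance

nodes_test=()                            # expected pages per NUMA node
nodes_test[0]=1024                       # single-node VM: everything lands on node0
(( nodes_test[0] += $(get_meminfo HugePages_Surp 0) ))   # += 0 here
echo "node0=${nodes_test[0]} expecting $nr_hugepages"
[[ ${nodes_test[0]} == "$nr_hugepages" ]]   # default_setup passes when these agree
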
00:04:34.189 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.189 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.189 ************************************ 00:04:34.189 START TEST per_node_1G_alloc 00:04:34.189 ************************************ 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:34.189 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.190 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.447 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:34.447 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # 
verify_nr_hugepages 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7861652 kB' 'MemAvailable: 10595900 kB' 'Buffers: 2208 kB' 'Cached: 2947680 kB' 'SwapCached: 0 kB' 'Active: 1007336 kB' 'Inactive: 2071240 kB' 'Active(anon): 980 kB' 'Inactive(anon): 146224 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128936 kB' 'Mapped: 39092 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143088 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63668 kB' 'KernelStack: 4604 kB' 'PageTables: 3560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22972 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.447 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.447 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
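The long run of "continue" lines above and below is the meminfo scanner at work: setup/common.sh walks /proc/meminfo with IFS=': ' and read -r var val _, skipping every key until it reaches the one requested (AnonHugePages at this point in the trace) and echoing its value. A compact sketch of that parsing pattern, with get_field as an illustrative name rather than the script's actual helper:

#!/usr/bin/env bash
# Sketch of the field-by-field /proc/meminfo scan visible in the trace.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys are skipped, one per trace line
        echo "$val"                        # value in kB (or pages for the HugePages_* counters)
        return 0
    done < /proc/meminfo
    return 1
}

anon_kb=$(get_field AnonHugePages)   # e.g. 20480, matching the value echoed in the trace
echo "AnonHugePages: ${anon_kb} kB"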
00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.710 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 20480 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=20480 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # 
local var val 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7861148 kB' 'MemAvailable: 10595396 kB' 'Buffers: 2208 kB' 'Cached: 2947680 kB' 'SwapCached: 0 kB' 'Active: 1007340 kB' 'Inactive: 2071100 kB' 'Active(anon): 980 kB' 'Inactive(anon): 146088 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925012 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128280 kB' 'Mapped: 39352 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143072 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63652 kB' 'KernelStack: 4608 kB' 'PageTables: 3408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 381960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22924 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.711 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
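Here the same scan is repeated for HugePages_Surp and, further down, HugePages_Rsvd; because no node argument is passed, the helper falls back from the per-node sysfs meminfo to the global /proc/meminfo, which is why the node/meminfo existence test fails in the trace. A hedged sketch of how the per-node expectation (NRHUGE=512 on HUGENODE=0, i.e. 512 pages of 2048 kB) could be cross-checked against sysfs and the surplus/reserved counters; the exact bookkeeping inside verify_nr_hugepages is not reproduced here:

#!/usr/bin/env bash
# Sketch: verify the per-node hugepage count and report global surplus/reserved pages.
node=0        # HUGENODE=0 in the trace above
expected=512  # NRHUGE=512 -> 512 x 2048 kB = 1 GiB on this node
per_node=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
total=$(cat "$per_node")
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
echo "node${node}=${total} expecting ${expected} (surp=${surp} rsvd=${rsvd})"
[[ $total -eq $expected ]]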
00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.712 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.713 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7861148 kB' 'MemAvailable: 10595396 kB' 'Buffers: 2208 kB' 'Cached: 2947680 kB' 'SwapCached: 0 kB' 'Active: 1007340 kB' 'Inactive: 2070872 kB' 'Active(anon): 980 kB' 'Inactive(anon): 145860 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925012 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128332 kB' 'Mapped: 39096 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143076 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63656 kB' 'KernelStack: 4592 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22876 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.713 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.714 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:34.715 nr_hugepages=512 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:34.715 resv_hugepages=0 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.715 surplus_hugepages=0 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.715 anon_hugepages=20480 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=20480 00:04:34.715 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7860648 kB' 'MemAvailable: 10594900 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007324 kB' 'Inactive: 2070948 kB' 'Active(anon): 964 kB' 'Inactive(anon): 145932 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128288 kB' 'Mapped: 39044 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143076 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63656 kB' 'KernelStack: 4620 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.715 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
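[editor's note] The repeated `[[ <key> == HugePages_Total ]]` / `continue` entries above and below are the xtrace of setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a per-node meminfo file) one key at a time. The following is a minimal, simplified sketch of that helper as the trace shows it; exact argument handling in the real setup/common.sh may differ.

```bash
#!/usr/bin/env bash
# Simplified sketch of the get_meminfo helper this trace is executing.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem

    # Per-node statistics live under sysfs; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; strip it so keys match.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value [kB]" lines and echo the value of the requested key.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total      # 512 in this run
get_meminfo HugePages_Surp 0     # surplus hugepages on NUMA node 0
```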
00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.716 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.717 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7860396 kB' 'MemUsed: 4386500 kB' 'SwapCached: 0 kB' 'Active: 1007324 kB' 'Inactive: 2071044 kB' 'Active(anon): 964 kB' 'Inactive(anon): 146028 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'FilePages: 2949892 kB' 'Mapped: 39044 kB' 'AnonPages: 128352 kB' 'Shmem: 18516 kB' 'KernelStack: 4596 kB' 'PageTables: 3576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79420 kB' 'Slab: 143068 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63648 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.717 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.718 13:57:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.718 node0=512 expecting 512 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.718 00:04:34.718 real 0m0.428s 00:04:34.718 user 0m0.223s 00:04:34.718 sys 0m0.233s 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.718 ************************************ 00:04:34.718 13:57:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:34.718 END TEST per_node_1G_alloc 00:04:34.718 ************************************ 00:04:34.718 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:34.718 13:57:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:34.718 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.718 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.718 13:57:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.718 ************************************ 00:04:34.718 START TEST even_2G_alloc 00:04:34.718 ************************************ 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:34.718 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.719 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:34.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:34.979 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12246896 kB' 'MemFree: 6815188 kB' 'MemAvailable: 9549444 kB' 'Buffers: 2208 kB' 'Cached: 2947688 kB' 'SwapCached: 0 kB' 'Active: 1007320 kB' 'Inactive: 2071024 kB' 'Active(anon): 964 kB' 'Inactive(anon): 146000 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925024 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 129016 kB' 'Mapped: 39316 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143116 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63696 kB' 'KernelStack: 4548 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 383632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22940 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.979 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 20480 00:04:34.980 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=20480 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6815188 kB' 'MemAvailable: 9549448 kB' 'Buffers: 2208 kB' 'Cached: 2947692 kB' 'SwapCached: 0 kB' 'Active: 1007320 kB' 'Inactive: 2071312 kB' 'Active(anon): 964 kB' 'Inactive(anon): 146284 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925028 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128796 kB' 'Mapped: 39540 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143048 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63628 kB' 'KernelStack: 4560 kB' 'PageTables: 3416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
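The long runs of [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue above are the xtrace of setup/common.sh's get_meminfo scanning the meminfo dump key by key until it reaches the requested field (here AnonHugePages, returning 20480 kB). A stripped-down sketch of that lookup pattern; the real helper also handles the per-node /sys/devices/system/node/node*/meminfo files and strips their "Node N" prefix:

  #!/usr/bin/env bash
  # Simplified get_meminfo: print the value of one /proc/meminfo field.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }
  anon=$(get_meminfo AnonHugePages)    # e.g. 20480 (kB) in the trace above
  echo "anon_hugepages=${anon}"
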
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.981 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 
13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:34.982 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.244 13:57:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6815188 kB' 'MemAvailable: 9549444 kB' 'Buffers: 2208 kB' 'Cached: 2947688 kB' 'SwapCached: 0 kB' 'Active: 1007332 kB' 'Inactive: 2071160 kB' 'Active(anon): 972 kB' 'Inactive(anon): 146140 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128916 kB' 'Mapped: 39324 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143128 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63708 kB' 'KernelStack: 4492 kB' 'PageTables: 3136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22892 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.244 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.245 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.246 nr_hugepages=1024 00:04:35.246 resv_hugepages=0 00:04:35.246 surplus_hugepages=0 00:04:35.246 anon_hugepages=20480 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=20480 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.246 13:57:21 
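The trace above is setup/common.sh's get_meminfo helper at work: it mapfiles the meminfo source, then walks it with `IFS=': ' read -r var val _`, hitting `continue` on every key until the requested field (HugePages_Rsvd here) matches, at which point it echoes the value and returns. A minimal stand-alone sketch of that lookup pattern, for reference (illustrative names only; the real helper additionally strips a `Node N ` prefix so the same loop can be reused for per-node meminfo files):

    get_meminfo_value() {
        # Print the value of a single meminfo field, e.g. HugePages_Rsvd -> 0.
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # same skip-until-match loop as in the trace
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

    # e.g. resv=$(get_meminfo_value HugePages_Rsvd)   # 0 in the run above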
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6815188 kB' 'MemAvailable: 9549444 kB' 'Buffers: 2208 kB' 'Cached: 2947688 kB' 'SwapCached: 0 kB' 'Active: 1007332 kB' 'Inactive: 2071296 kB' 'Active(anon): 972 kB' 'Inactive(anon): 146276 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128776 kB' 'Mapped: 39324 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143128 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63708 kB' 'KernelStack: 4460 kB' 'PageTables: 3060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22940 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.246 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.247 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6814936 kB' 'MemUsed: 5431960 kB' 'SwapCached: 0 kB' 'Active: 1007332 kB' 'Inactive: 2071344 kB' 'Active(anon): 972 kB' 'Inactive(anon): 146324 kB' 'Active(file): 1006360 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'FilePages: 2949896 kB' 'Mapped: 39324 kB' 'AnonPages: 128820 kB' 'Shmem: 18516 kB' 'KernelStack: 4480 kB' 'PageTables: 2940 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79420 kB' 'Slab: 143128 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63708 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
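For the per-node pass the same scan is pointed at /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; those per-node files prefix every line with `Node 0 `, which is why the trace shows `mem=("${mem[@]#Node +([0-9]) }")` stripping that prefix before the field comparison. A rough sketch of reading one per-node hugepage counter directly (path as in the trace; the awk filter is just one way to skip the prefix, not the script's own method):

    node=0
    node_meminfo=/sys/devices/system/node/node${node}/meminfo
    # Per-node lines look like: "Node 0 HugePages_Surp:     0"
    surp=$(awk '$3 == "HugePages_Surp:" {print $4}' "$node_meminfo")
    echo "node${node} surplus hugepages: ${surp:-unknown}"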
00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.248 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.249 node0=1024 expecting 1024 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.249 13:57:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.249 00:04:35.249 real 0m0.433s 00:04:35.249 user 0m0.252s 00:04:35.249 sys 0m0.208s 00:04:35.250 13:57:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.250 13:57:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.250 ************************************ 00:04:35.250 END TEST even_2G_alloc 00:04:35.250 ************************************ 00:04:35.250 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:35.250 13:57:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:35.250 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.250 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.250 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.250 ************************************ 00:04:35.250 START TEST odd_alloc 00:04:35.250 ************************************ 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:35.250 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.250 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:35.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:35.510 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 
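The odd_alloc case asks for HUGEMEM=2049 (MB), i.e. size=2098176 kB; against the 2048 kB hugepage size reported in the meminfo dumps above that is 2098176 / 2048 = 1024.5 pages, and the test settles on an odd count of 1025 hugepages on the single node before re-running scripts/setup.sh. As a quick check of that arithmetic (the round-up behaviour is inferred from the values in the trace, not taken from the script source):

    size_kb=2098176          # HUGEMEM=2049 MB expressed in kB
    hugepage_kb=2048         # Hugepagesize from the meminfo dump above
    pages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))   # rounds up to 1025
    echo "nr_hugepages=$pages"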
-- # local node= 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6811384 kB' 'MemAvailable: 9545636 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007360 kB' 'Inactive: 2071300 kB' 'Active(anon): 1004 kB' 'Inactive(anon): 146280 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 128596 kB' 'Mapped: 39084 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143136 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63716 kB' 'KernelStack: 4512 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073848 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22940 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.510 
13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.510 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 
13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 20480 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=20480 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6811384 kB' 'MemAvailable: 9545636 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007352 kB' 'Inactive: 2070880 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145860 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 128412 kB' 'Mapped: 39020 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143088 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63668 kB' 'KernelStack: 4496 kB' 'PageTables: 3188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073848 kB' 'Committed_AS: 384680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.511 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 
13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.512 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
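At this point the test has walked the same snapshot twice, yielding anon=20480 (AnonHugePages) and surp=0 (HugePages_Surp); the pass that starts next repeats the scan for HugePages_Rsvd. Purely as an illustration of the counters being collected (the traced script intentionally does one full scan per field), the same values could be pulled in a single pass, e.g.:

  awk '/^(AnonHugePages|HugePages_(Total|Rsvd|Surp)):/ { print $1, $2 }' /proc/meminfo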
00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6811384 kB' 'MemAvailable: 9545636 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007352 kB' 'Inactive: 2070968 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145948 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 128316 kB' 'Mapped: 39020 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143088 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63668 kB' 'KernelStack: 4560 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073848 kB' 'Committed_AS: 381960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22860 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.775 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
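The hugepage fields in the meminfo snapshots printed above are internally consistent: 1025 pages x 2048 kB per page = 2,099,200 kB, matching the reported 'Hugetlb: 2099200 kB'. With HugePages_Surp and HugePages_Rsvd both 0, the odd_alloc consistency check that appears a little further down, (( 1025 == nr_hugepages + surp + resv )), reduces to a straight comparison against HugePages_Total, so the deliberately odd (non-power-of-two) request of 1025 pages is expected to pass once resv comes back 0.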
00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.776 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 
13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:35.777 nr_hugepages=1025 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:35.777 resv_hugepages=0 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.777 surplus_hugepages=0 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.777 anon_hugepages=20480 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=20480 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6811384 kB' 'MemAvailable: 9545636 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007360 kB' 'Inactive: 2070792 kB' 'Active(anon): 1004 kB' 'Inactive(anon): 145772 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 128412 kB' 'Mapped: 38976 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143088 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63668 kB' 'KernelStack: 4492 kB' 'PageTables: 3344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073848 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 
13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.777 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.778 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6811384 kB' 'MemUsed: 5435512 kB' 'SwapCached: 0 kB' 'Active: 1007360 kB' 'Inactive: 2070700 kB' 'Active(anon): 1004 kB' 'Inactive(anon): 145680 kB' 'Active(file): 1006356 kB' 'Inactive(file): 1925020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'FilePages: 2949892 kB' 'Mapped: 39020 kB' 'AnonPages: 128060 kB' 'Shmem: 18516 kB' 'KernelStack: 4512 kB' 'PageTables: 3224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79420 kB' 'Slab: 143080 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63660 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.779 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.780 node0=1025 expecting 1025 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:35.780 00:04:35.780 real 0m0.511s 00:04:35.780 user 0m0.263s 00:04:35.780 sys 0m0.256s 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.780 13:57:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:35.780 ************************************ 00:04:35.780 END TEST odd_alloc 00:04:35.780 ************************************ 00:04:35.780 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:35.780 13:57:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:35.780 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.780 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.780 13:57:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:35.780 ************************************ 00:04:35.780 START 
TEST custom_alloc 00:04:35.780 ************************************ 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:35.780 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.781 13:57:21 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.781 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:36.042 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7861672 kB' 'MemAvailable: 10595916 kB' 'Buffers: 2208 kB' 'Cached: 2947676 kB' 'SwapCached: 0 kB' 'Active: 1007388 kB' 'Inactive: 2071120 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 146104 kB' 'Active(file): 1006352 kB' 'Inactive(file): 1925016 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 8 kB' 'Writeback: 0 kB' 'AnonPages: 128368 kB' 'Mapped: 39708 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143104 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63684 kB' 'KernelStack: 4632 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22988 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.042 13:57:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.042 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
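The [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] check traced at setup/hugepages.sh@96 above gates the anon_hugepages figure: AnonHugePages is only counted when transparent hugepages are not disabled. A minimal standalone sketch of that gate, assuming the mode string comes from /sys/kernel/mm/transparent_hugepage/enabled (the variable names below are illustrative, not the actual setup/hugepages.sh code):

    # Sketch only: count anonymous THP usage the way the verify step above does.
    thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "[always] madvise never"
    anon_hugepages=0
    if [[ $thp_mode != *"[never]"* ]]; then
        # AnonHugePages is reported in kB in /proc/meminfo (20480 kB in this run)
        anon_hugepages=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=$anon_hugepages"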
00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 20480 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=20480 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7861168 kB' 'MemAvailable: 10595412 kB' 'Buffers: 2208 kB' 'Cached: 2947676 kB' 'SwapCached: 0 kB' 'Active: 1007396 kB' 'Inactive: 2070268 kB' 'Active(anon): 1020 kB' 'Inactive(anon): 145276 kB' 'Active(file): 1006376 kB' 'Inactive(file): 1924992 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 127992 kB' 'Mapped: 39176 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143160 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63740 kB' 'KernelStack: 4440 kB' 'PageTables: 2880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22924 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.043 13:57:22 
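The repeated IFS=': ' / read -r / continue lines around this point are the xtrace of get_meminfo walking /proc/meminfo one key at a time until it reaches the requested field (AnonHugePages above, then HugePages_Surp). A minimal sketch of that scanning pattern follows; the function name and the simplified argument handling here are illustrative, not the actual setup/common.sh source:

    # Sketch of the key-scanning pattern visible in the trace (illustrative name,
    # simplified handling; not the real setup/common.sh helper).
    get_meminfo_value() {
        local get=$1 node=${2:-}          # key to look up, optional NUMA node
        local mem_f=/proc/meminfo
        # Per-node lookups would read the node-specific meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every skipped key appears as "continue" in the xtrace
            echo "$val"                        # the matching value, e.g. "echo 20480" above
            return 0
        done < "$mem_f"
        return 1
    }

The traced helper additionally mapfiles the whole file and strips "Node <n> " prefixes (the mem=("${mem[@]#Node +([0-9]) }") line), which is what lets the same loop also parse per-node meminfo files.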
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.043 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.044 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.306 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7861168 kB' 'MemAvailable: 10595412 kB' 'Buffers: 2208 kB' 'Cached: 2947676 kB' 'SwapCached: 0 kB' 'Active: 1007396 kB' 'Inactive: 2070424 kB' 'Active(anon): 1020 kB' 'Inactive(anon): 145432 kB' 'Active(file): 1006376 kB' 'Inactive(file): 1924992 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 127880 kB' 'Mapped: 39176 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143160 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63740 kB' 'KernelStack: 4408 kB' 'PageTables: 2800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 
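The meminfo snapshot printed above is internally consistent for the hugepage pool: 512 pages of 2048 kB account for the 1048576 kB reported as Hugetlb, with all 512 pages free and none reserved or surplus. A quick way to check that arithmetic against a live /proc/meminfo (the values in the comments are the ones from this trace; awk field extraction is just one convenient way to read them):

    # Consistency check on the hugepage counters shown in the dump above.
    # Note: Hugetlb sums all hugepage sizes, so the equality only holds when a
    # single page size is in use, as in this run.
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 512 in this run
    size_kb=$(awk '/^Hugepagesize:/  {print $2}' /proc/meminfo)   # 2048 kB per page here
    hugetlb_kb=$(awk '/^Hugetlb:/    {print $2}' /proc/meminfo)   # 1048576 kB here
    (( total * size_kb == hugetlb_kb )) &&
        echo "hugetlb pool: ${total} x ${size_kb} kB = ${hugetlb_kb} kB"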
13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.307 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.308 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 
13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:36.309 nr_hugepages=512 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:36.309 resv_hugepages=0 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:36.309 surplus_hugepages=0 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=20480 00:04:36.309 anon_hugepages=20480 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7861168 kB' 'MemAvailable: 10595412 kB' 'Buffers: 2208 kB' 'Cached: 2947676 kB' 'SwapCached: 0 kB' 'Active: 1007396 kB' 'Inactive: 2070232 kB' 'Active(anon): 1020 kB' 'Inactive(anon): 145240 kB' 'Active(file): 1006376 kB' 'Inactive(file): 1924992 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 127948 kB' 'Mapped: 39176 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143168 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63748 kB' 'KernelStack: 4440 kB' 'PageTables: 2880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5599160 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22940 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.309 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.310 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.311 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 7860664 kB' 'MemUsed: 4386232 kB' 'SwapCached: 0 kB' 'Active: 1007396 kB' 'Inactive: 2070376 kB' 'Active(anon): 1020 kB' 'Inactive(anon): 145384 kB' 'Active(file): 1006376 kB' 'Inactive(file): 1924992 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'FilePages: 2949884 kB' 'Mapped: 39176 kB' 'AnonPages: 127832 kB' 'Shmem: 18516 kB' 'KernelStack: 4460 kB' 'PageTables: 2760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79420 kB' 'Slab: 143152 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63732 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 
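(Note on the trace above: the long runs of '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' / 'continue' entries are setup/common.sh's get_meminfo walking every field of the requested meminfo file until it reaches the one it was asked for; here it has just returned 512 for HugePages_Total on node0, and the snapshot it printed is internally consistent, with MemUsed 4386232 kB = MemTotal 12246896 kB - MemFree 7860664 kB. A minimal sketch of that lookup, reconstructed only from the xtrace shown here; the helper name get_meminfo_sketch is made up and the real test/setup/common.sh may differ in detail:

    #!/usr/bin/env bash
    shopt -s extglob                                  # the traced script relies on +([0-9]) patterns
    get_meminfo_sketch() {                            # usage: get_meminfo_sketch HugePages_Total 0
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem line
        # prefer the per-node view when a node id is given and sysfs exposes it
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")              # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"    # e.g. var=HugePages_Total val=512
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

Every key that is not the requested one produces exactly the continue / IFS / read triple seen throughout this excerpt, which is why the scan of a full meminfo snapshot generates so many near-identical trace lines.)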
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.311 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.312 node0=512 expecting 512 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:36.312 00:04:36.312 real 0m0.452s 00:04:36.312 user 0m0.251s 00:04:36.312 sys 0m0.230s 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.312 13:57:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.313 ************************************ 00:04:36.313 END TEST custom_alloc 00:04:36.313 ************************************ 00:04:36.313 13:57:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:36.313 13:57:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:36.313 13:57:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.313 13:57:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.313 13:57:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:36.313 ************************************ 00:04:36.313 START TEST no_shrink_alloc 00:04:36.313 ************************************ 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.313 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.601 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:36.601 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 
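(Note on the no_shrink_alloc setup just traced: get_test_nr_hugepages turns the requested 2097152 kB into nr_hugepages=1024, presumably by dividing by the 2048 kB default huge page size reported by the kernel, and get_test_nr_hugepages_per_node then pins the whole count on the single user-supplied node, node 0. A hedged sketch of that sizing step, using only values visible in the trace; the function name and exact arithmetic are illustrative, not the literal setup/hugepages.sh code:

    # sketch, assuming size and default_hugepages are both in kB as the trace suggests
    get_test_nr_hugepages_sketch() {                  # e.g. get_test_nr_hugepages_sketch 2097152 0
        local size=$1; shift
        local node_ids=("$@")                         # ('0') in the trace above
        local default_hugepages=2048                  # Hugepagesize from /proc/meminfo, in kB
        local nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        local -g nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages            # with one node listed it gets the full count
        done
    }

With node_ids=('0') this leaves nodes_test[0]=1024, the request that scripts/setup.sh then applies; the meminfo snapshot printed just below (HugePages_Total: 1024, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB = 1024 x 2048 kB) shows the kernel granted it.)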
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6807564 kB' 'MemAvailable: 9541816 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007380 kB' 'Inactive: 2071132 kB' 'Active(anon): 1004 kB' 'Inactive(anon): 146132 kB' 'Active(file): 1006376 kB' 'Inactive(file): 1925000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128600 kB' 'Mapped: 39012 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143052 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63632 kB' 'KernelStack: 4588 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22956 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.601 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 20480 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=20480 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.602 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6807564 kB' 'MemAvailable: 9541816 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007372 kB' 'Inactive: 2071412 kB' 'Active(anon): 996 kB' 'Inactive(anon): 146412 kB' 'Active(file): 1006376 kB' 'Inactive(file): 1925000 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128956 kB' 'Mapped: 39008 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143036 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63616 kB' 'KernelStack: 4564 kB' 'PageTables: 3196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 385576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
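The trace around this point is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time: each line is split on ': ' into a key and a value, the key is compared against the requested field (HugePages_Surp here), non-matching keys fall through to the 'continue' branch, and the matching key's value is echoed back to hugepages.sh. A minimal stand-alone sketch of that flow, reconstructed from the trace (get_meminfo_sketch is a made-up name; the real helper also buffers the file via the mapfile/printf calls shown above and supports a per-node file, both omitted here for brevity):

  # Sketch only, not the SPDK script itself: approximate the scan seen in this trace
  # for the no-node case.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Echo the value of the requested field and stop; skip everything else.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  get_meminfo_sketch HugePages_Surp   # prints 0 on the machine traced above

Each of the repeated "[[ X == \H\u\g\e... ]]" / "continue" pairs in the log is simply one iteration of that loop rendered by xtrace.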
00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.603 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
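Immediately after this the trace re-runs get_meminfo for HugePages_Rsvd, and again the node-specific branch is skipped: 'local node=' is empty, so '[[ -e /sys/devices/system/node/node/meminfo ]]' tests a path with no node number, which does not exist, and the helper falls back to /proc/meminfo. The 'mem=("${mem[@]#Node +([0-9]) }")' rewrite seen in the trace exists for the per-node case, where every line carries a "Node N " prefix that would otherwise break the key/value split. A small illustration of that stripping (the sample lines are invented for the example, not taken from this run):

  # Sketch: why the "Node +([0-9]) " prefix strip exists. Per-node meminfo lines
  # look like "Node 0 MemTotal: ... kB"; the prefix must go before splitting on ': '.
  # extglob is required for the +([0-9]) pattern.
  shopt -s extglob
  mem=("Node 0 MemTotal: 12246896 kB" "Node 0 HugePages_Surp: 0")
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}"
  # MemTotal: 12246896 kB
  # HugePages_Surp: 0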
00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6807564 kB' 'MemAvailable: 9541816 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007376 kB' 'Inactive: 2070824 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145828 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128304 kB' 'Mapped: 39052 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143032 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63612 kB' 'KernelStack: 4584 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22876 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.604 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.605 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.606 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.606 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.606 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.606 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.606 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.606 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.868 
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.868 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:36.869 nr_hugepages=1024 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:36.869 resv_hugepages=0 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:04:36.869 surplus_hugepages=0 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:36.869 anon_hugepages=20480 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=20480 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6807564 kB' 'MemAvailable: 9541816 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007384 kB' 'Inactive: 2070880 kB' 'Active(anon): 1004 kB' 'Inactive(anon): 145884 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128400 kB' 'Mapped: 39052 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143024 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63604 kB' 'KernelStack: 4580 kB' 'PageTables: 3232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.869 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
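All of this scanning feeds a simple accounting check in setup/hugepages.sh (visible at @102-@110 in the trace): with nr_hugepages=1024, surplus 0, reserved 0 and 20480 kB of anonymous hugepages, the script checks that 1024 pages still account for nr_hugepages plus surplus and reserved pages, then re-reads HugePages_Total to confirm the pool itself was not shrunk. A compact way to reproduce that arithmetic by hand (the meminfo helper below is a hypothetical convenience, not SPDK code; variable names follow the trace):

  # Sketch of the no_shrink_alloc accounting, reconstructed from the trace.
  meminfo() { awk -v key="$1:" '$1 == key { print $2; exit }' /proc/meminfo; }

  nr_hugepages=1024                      # requested allocation, echoed in the trace
  surp=$(meminfo HugePages_Surp)         # 0 in this run
  resv=$(meminfo HugePages_Rsvd)         # 0 in this run
  total=$(meminfo HugePages_Total)       # 1024 in this run

  (( total == nr_hugepages + surp + resv )) \
      || echo "HugePages_Total drifted from the requested allocation" >&2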
00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.870 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.871 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
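The long run of '[[ <field> == HugePages_Total ]] ... continue' trace entries above is one call to the get_meminfo helper in setup/common.sh: it scans the chosen meminfo file field by field until it reaches the requested key, then echoes the value (1024 for HugePages_Total here). A minimal sketch of that loop, reconstructed from this xtrace rather than copied from the SPDK source, is:

    # Sketch only: reconstructed from the trace above, not the verbatim SPDK helper.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer that node's own meminfo when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node <n> " prefix of per-node entries
        local entry
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$entry"
            [[ $var == "$get" ]] || continue    # the repeated "continue" entries in the trace
            echo "$val"                         # e.g. 1024 for HugePages_Total
            return 0
        done
        return 1
    }

The next traced call, get_meminfo HugePages_Surp 0, takes the per-node branch: it reads /sys/devices/system/node/node0/meminfo and ends with 'echo 0'.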
00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6807312 kB' 'MemUsed: 5439584 kB' 'SwapCached: 0 kB' 'Active: 1007384 kB' 'Inactive: 2070824 kB' 'Active(anon): 1004 kB' 'Inactive(anon): 145828 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'FilePages: 2949892 kB' 'Mapped: 39052 kB' 'AnonPages: 128324 kB' 'Shmem: 18516 kB' 'KernelStack: 4612 kB' 'PageTables: 3312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79420 kB' 'Slab: 143040 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63620 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.872 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:36.873 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.873 node0=1024 expecting 1024 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:36.873 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:36.874 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:36.874 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:37.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:37.137 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:37.137 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:37.137 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:37.137 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ [always] madvise never != *\[\n\e\v\e\r\]* ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6804568 kB' 'MemAvailable: 9538820 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007376 kB' 'Inactive: 2070956 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145960 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128712 kB' 'Mapped: 39516 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143056 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63636 kB' 'KernelStack: 4712 kB' 'PageTables: 3904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 23004 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.138 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 20480 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=20480 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6804568 kB' 'MemAvailable: 9538820 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007376 kB' 'Inactive: 2070932 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145936 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128452 kB' 'Mapped: 39256 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143048 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63628 kB' 'KernelStack: 4628 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 383104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22924 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.139 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.140 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 
13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.141 13:57:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6804568 kB' 'MemAvailable: 9538820 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007376 kB' 'Inactive: 2071208 kB' 'Active(anon): 996 kB' 'Inactive(anon): 146212 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128748 kB' 'Mapped: 39300 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143088 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63668 kB' 'KernelStack: 4616 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22876 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 
13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.141 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 
13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.142 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:37.143 nr_hugepages=1024 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.143 resv_hugepages=0 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:04:37.143 surplus_hugepages=0 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.143 anon_hugepages=20480 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=20480 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6810500 kB' 'MemAvailable: 9544752 kB' 'Buffers: 2208 kB' 'Cached: 2947684 kB' 'SwapCached: 0 kB' 'Active: 1007376 kB' 'Inactive: 2070944 kB' 'Active(anon): 996 kB' 'Inactive(anon): 145948 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'AnonPages: 128400 kB' 'Mapped: 39296 kB' 'Shmem: 18516 kB' 'KReclaimable: 79420 kB' 'Slab: 143088 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63668 kB' 'KernelStack: 4584 kB' 'PageTables: 3680 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074872 kB' 'Committed_AS: 382348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 22908 kB' 'VmallocChunk: 0 kB' 'Percpu: 4848 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.143 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.144 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
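The loop traced above is the get_meminfo pattern from setup/common.sh: the relevant meminfo file is read with IFS=': ' and read -r var val _, every key is skipped with "continue" until the requested field (HugePages_Total here) matches, and its value is echoed back; hugepages.sh then seeds one nodes_sys entry per NUMA node. A minimal bash sketch of that lookup, assuming standard /proc and sysfs paths (an illustration of the pattern, not the actual SPDK helper):

    # Sketch of the meminfo lookup traced above (illustration only; the real
    # helper in test/setup/common.sh differs in detail).
    get_meminfo_sketch() {
        local key=$1 node=$2
        local mem_f=/proc/meminfo
        # Per-node queries read the node-local meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Node-local lines carry a "Node N " prefix; strip it, then scan field
        # by field until the requested key matches and print its value.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Called as get_meminfo_sketch HugePages_Total, it would print 1024 on this VM, matching the value echoed at setup/common.sh@33 above.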
00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246896 kB' 'MemFree: 6810000 kB' 'MemUsed: 5436896 kB' 'SwapCached: 0 kB' 'Active: 1007368 kB' 'Inactive: 2070732 kB' 'Active(anon): 988 kB' 'Inactive(anon): 145736 kB' 'Active(file): 1006380 kB' 'Inactive(file): 1924996 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 12 kB' 'Writeback: 0 kB' 'FilePages: 2949892 kB' 'Mapped: 39256 kB' 'AnonPages: 128460 kB' 'Shmem: 18516 kB' 'KernelStack: 4576 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79420 kB' 'Slab: 143084 kB' 'SReclaimable: 79420 kB' 'SUnreclaim: 63664 kB' 'AnonHugePages: 20480 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.145 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:37.146 13:57:23 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.146 node0=1024 expecting 1024 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:37.146 00:04:37.146 real 0m0.905s 00:04:37.146 user 0m0.492s 00:04:37.146 sys 0m0.446s 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.146 13:57:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.146 ************************************ 00:04:37.146 END TEST no_shrink_alloc 00:04:37.146 ************************************ 00:04:37.404 13:57:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:37.404 13:57:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:37.404 00:04:37.404 real 0m3.900s 00:04:37.404 user 0m1.970s 00:04:37.404 sys 0m1.876s 00:04:37.404 13:57:23 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.404 13:57:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.404 ************************************ 00:04:37.404 END TEST hugepages 00:04:37.404 ************************************ 00:04:37.404 13:57:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:37.404 13:57:23 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:37.404 13:57:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.405 13:57:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.405 13:57:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.405 ************************************ 00:04:37.405 START TEST driver 00:04:37.405 ************************************ 00:04:37.405 13:57:23 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:37.405 * Looking for test storage... 
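Before the driver tests above begin, the hugepages suite wraps up: the per-node HugePages_Surp value (0) is folded into nodes_test, node0 is asserted to hold the expected 1024 pages, and clear_hp zeroes every per-node hugepage pool. A rough sketch of that clearing step, assuming each "echo 0" in the xtrace output is redirected into the pool's nr_hugepages file (set -x does not print redirections):

    # Rough sketch of the clear_hp step traced above (assumption: each "echo 0"
    # is redirected into nr_hugepages, which xtrace hides). Needs root.
    clear_hugepages_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # Release every per-node hugepage pool back to the kernel.
                echo 0 > "$hp/nr_hugepages"
            done
        done
    }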
00:04:37.405 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:37.405 13:57:23 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:37.405 13:57:23 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.405 13:57:23 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.662 13:57:23 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:37.662 13:57:23 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.662 13:57:23 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.662 13:57:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.662 ************************************ 00:04:37.662 START TEST guess_driver 00:04:37.662 ************************************ 00:04:37.662 13:57:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:37.662 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.14.0-362.24.1.el9_3.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:37.663 insmod /lib/modules/5.14.0-362.24.1.el9_3.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:37.663 Looking for driver=uio_pci_generic 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.663 13:57:23 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:38.231 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:38.231 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:04:38.231 13:57:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.231 13:57:24 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:38.231 13:57:24 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:38.231 13:57:24 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:38.231 13:57:24 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:38.231 13:57:24 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:38.231 13:57:24 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.231 13:57:24 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:38.489 00:04:38.489 real 0m0.850s 00:04:38.489 user 0m0.283s 00:04:38.489 sys 0m0.512s 00:04:38.489 13:57:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.489 13:57:24 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.489 ************************************ 00:04:38.489 END TEST guess_driver 00:04:38.489 ************************************ 00:04:38.748 13:57:24 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:38.748 00:04:38.748 real 0m1.311s 00:04:38.748 user 0m0.442s 00:04:38.748 sys 0m0.821s 00:04:38.748 13:57:24 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.748 13:57:24 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:38.748 ************************************ 00:04:38.748 END TEST driver 00:04:38.748 ************************************ 00:04:38.748 13:57:24 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:38.748 13:57:24 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:38.748 13:57:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.748 13:57:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.748 13:57:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:38.748 ************************************ 00:04:38.748 START TEST devices 00:04:38.748 ************************************ 00:04:38.748 13:57:24 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:38.748 * Looking for test storage... 
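The guess_driver run above shows how setup/driver.sh picks a kernel driver for the NVMe controller: pick_driver first tries the vfio path, which needs populated /sys/kernel/iommu_groups (or the vfio enable_unsafe_noiommu_mode knob), and since this VM has no IOMMU groups it falls back to uio, accepting uio_pci_generic only after modprobe --show-depends resolves it to real .ko files. A condensed sketch of that decision order, taken from the trace (not a verbatim copy of test/setup/driver.sh):

    # Condensed sketch of the driver selection traced above (fallback order from
    # the log; the real driver.sh helpers are structured differently).
    pick_driver_sketch() {
        local groups=(/sys/kernel/iommu_groups/*)
        # Prefer vfio-pci when IOMMU groups exist or unsafe no-IOMMU mode is on.
        if [[ -e ${groups[0]} ]] ||
           [[ $(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode 2>/dev/null) == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # Otherwise accept uio_pci_generic only if modprobe can resolve it (and
        # its uio dependency) to actual kernel module files.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found' >&2
        return 1
    }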
00:04:38.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:38.748 13:57:24 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:38.748 13:57:24 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:38.748 13:57:24 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.748 13:57:24 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.006 13:57:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:39.006 13:57:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:39.006 13:57:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:39.006 13:57:24 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:39.265 No valid GPT data, bailing 00:04:39.265 13:57:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.265 13:57:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.265 13:57:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.265 13:57:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:39.265 13:57:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:39.265 13:57:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:39.265 13:57:25 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:04:39.265 13:57:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:39.265 13:57:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.265 13:57:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:39.265 13:57:25 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:39.265 13:57:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:39.265 13:57:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:39.265 13:57:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.265 13:57:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.265 13:57:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.265 ************************************ 00:04:39.265 START TEST nvme_mount 00:04:39.265 ************************************ 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:39.265 13:57:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:40.202 Creating new GPT entries in memory. 00:04:40.202 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.202 other utilities. 00:04:40.202 13:57:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.202 13:57:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.202 13:57:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.202 13:57:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.202 13:57:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:41.140 Creating new GPT entries in memory. 
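The partition_drive step above wipes the test disk's partition table with sgdisk --zap-all and then, under flock, creates a single small test partition at sectors 2048-264191, while sync_dev_uevents.sh waits for the matching partition uevent so the test does not race udev; the messages interleaved here ("Creating new GPT entries in memory." and "The operation has completed successfully.") are sgdisk's normal output. A destructive sketch of that sequence, with the device path and sector range copied from the log:

    # Destructive sketch of the partitioning traced above (device and sector
    # range copied from the log; the real partition_drive in test/setup/common.sh
    # loops once per requested partition and syncs on udev events).
    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                          # drop any existing GPT/MBR
    flock "$disk" sgdisk "$disk" --new=1:2048:264191  # one small test partition
    # The log shows scripts/sync_dev_uevents.sh block/partition nvme0n1p1 waiting
    # for the new partition's uevent before the test touches /dev/nvme0n1p1.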
00:04:41.140 The operation has completed successfully. 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 172652 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.400 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:41.659 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:41.659 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:41.918 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:41.918 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:41.918 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:41.918 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:41.918 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.177 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:42.177 13:57:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
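Each mount variant ends with the verify step running here: with PCI_ALLOWED restricted to 0000:00:10.0, setup.sh config is re-run and its status column is checked for the expected "Active devices: ..." entry (proving the mount or partition holder kept the controller from being rebound), and when a mount point is given the dummy test file must still exist before it is removed. A compact sketch of that check, with the paths and status text copied from the log (the real verify() in test/setup/devices.sh takes them as arguments):

    # Compact sketch of the verify pattern traced here (paths and the expected
    # status text copied from the log; the real verify() takes them as arguments).
    verify_mount_sketch() {
        local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
        local test_file=$mount_point/test_nvme
        # setup.sh config must report the device as active and therefore skipped,
        # e.g. "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev".
        PCI_ALLOWED=0000:00:10.0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh config \
            | grep -q 'Active devices: .*nvme0n1' || return 1
        # The mount must still be live and the dummy file must have survived.
        mountpoint -q "$mount_point" || return 1
        [[ -e $test_file ]] || return 1
        rm "$test_file"
    }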
00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.177 13:57:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.437 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.437 00:04:42.437 real 0m3.321s 00:04:42.437 user 0m0.414s 00:04:42.437 sys 0m0.728s 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.437 13:57:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.437 ************************************ 00:04:42.437 END TEST nvme_mount 00:04:42.437 ************************************ 00:04:42.696 13:57:28 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:42.696 13:57:28 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:42.696 13:57:28 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.696 13:57:28 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.696 13:57:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.696 ************************************ 00:04:42.696 START TEST dm_mount 00:04:42.696 
************************************ 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:42.696 13:57:28 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:43.710 Creating new GPT entries in memory. 00:04:43.710 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:43.710 other utilities. 00:04:43.710 13:57:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:43.710 13:57:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.710 13:57:29 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:43.710 13:57:29 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:43.710 13:57:29 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:44.644 Creating new GPT entries in memory. 00:04:44.644 The operation has completed successfully. 00:04:44.644 13:57:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:44.644 13:57:30 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:44.644 13:57:30 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:44.644 13:57:30 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:44.644 13:57:30 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:45.576 The operation has completed successfully. 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 173034 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.576 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:45.833 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.834 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:45.834 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.091 
13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.091 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.092 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:04:46.092 13:57:31 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.092 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.092 13:57:31 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:46.349 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:46.349 00:04:46.349 real 0m3.821s 00:04:46.349 user 0m0.291s 00:04:46.349 sys 0m0.493s 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.349 13:57:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.349 ************************************ 00:04:46.349 END TEST dm_mount 00:04:46.349 ************************************ 00:04:46.349 13:57:32 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:46.349 13:57:32 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:46.349 13:57:32 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:46.349 13:57:32 setup.sh.devices -- 
setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:46.349 13:57:32 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.349 13:57:32 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.349 13:57:32 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.349 13:57:32 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.607 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:46.607 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:46.607 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.607 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.607 13:57:32 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:46.607 13:57:32 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:46.607 13:57:32 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.607 13:57:32 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.607 13:57:32 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.607 13:57:32 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.607 13:57:32 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:46.607 00:04:46.607 real 0m7.827s 00:04:46.607 user 0m1.013s 00:04:46.607 sys 0m1.559s 00:04:46.607 13:57:32 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.607 13:57:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:46.607 ************************************ 00:04:46.607 END TEST devices 00:04:46.607 ************************************ 00:04:46.607 13:57:32 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:46.607 00:04:46.607 real 0m15.847s 00:04:46.607 user 0m4.592s 00:04:46.607 sys 0m5.914s 00:04:46.607 13:57:32 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.607 ************************************ 00:04:46.607 END TEST setup.sh 00:04:46.607 13:57:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:46.607 ************************************ 00:04:46.607 13:57:32 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.607 13:57:32 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:46.865 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:46.865 Hugepages 00:04:46.865 node hugesize free / total 00:04:46.865 node0 1048576kB 0 / 0 00:04:46.865 node0 2048kB 2048 / 2048 00:04:46.865 00:04:46.865 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:46.865 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:47.124 NVMe 0000:00:10.0 1b36 0010 0 nvme nvme0 nvme0n1 00:04:47.124 13:57:32 -- spdk/autotest.sh@130 -- # uname -s 00:04:47.124 13:57:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:47.124 13:57:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:47.124 13:57:32 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.382 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:47.382 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:47.382 13:57:33 -- common/autotest_common.sh@1532 
-- # sleep 1 00:04:48.757 13:57:34 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:48.757 13:57:34 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:48.757 13:57:34 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:48.757 13:57:34 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:48.757 13:57:34 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:48.757 13:57:34 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:48.757 13:57:34 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.757 13:57:34 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:48.757 13:57:34 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:48.757 13:57:34 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:48.757 13:57:34 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:04:48.757 13:57:34 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:48.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:48.757 Waiting for block devices as requested 00:04:48.757 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:49.014 13:57:34 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:49.014 13:57:34 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:49.014 13:57:34 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:04:49.014 13:57:34 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:49.014 13:57:34 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:49.014 13:57:34 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:04:49.014 13:57:34 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:04:49.015 13:57:34 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:49.015 13:57:34 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:49.015 13:57:34 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:49.015 13:57:34 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:49.015 13:57:34 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:49.015 13:57:34 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:49.015 13:57:34 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:04:49.015 13:57:34 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:49.015 13:57:34 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:49.015 13:57:34 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:49.015 13:57:34 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:49.015 13:57:34 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:49.015 13:57:34 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:49.015 13:57:34 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:49.015 13:57:34 -- common/autotest_common.sh@1557 -- # continue 00:04:49.015 13:57:34 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:49.015 13:57:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.015 13:57:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.015 13:57:34 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:49.015 13:57:34 -- common/autotest_common.sh@722 -- # xtrace_disable 
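The pre-cleanup pass traced above enumerates the NVMe controllers and asks each one whether it supports Namespace Management before deciding if any namespaces need reverting. Condensed into plain shell, the probe looks roughly like this (a sketch reconstructed from the xtrace records, not an excerpt of autotest_common.sh; the OACS bit-3 mask is an assumption that happens to match the values printed in this run):

    # assumes nvme-cli is installed and /dev/nvme0 exists, as on this host
    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)             # " 0x12a" in this run
    if (( (oacs & 0x8) != 0 )); then                                    # OACS bit 3: Namespace Management supported
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)   # " 0" here: no unallocated capacity, nothing to revert
    fi
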
00:04:49.015 13:57:34 -- common/autotest_common.sh@10 -- # set +x 00:04:49.015 13:57:34 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.272 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:04:49.272 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.272 13:57:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:49.272 13:57:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:49.272 13:57:35 -- common/autotest_common.sh@10 -- # set +x 00:04:49.531 13:57:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:49.531 13:57:35 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:49.531 13:57:35 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.531 13:57:35 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:49.531 13:57:35 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:49.531 13:57:35 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:49.531 13:57:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:49.531 13:57:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:49.531 13:57:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.531 13:57:35 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:49.531 13:57:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:49.531 13:57:35 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:49.531 13:57:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:04:49.531 13:57:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:49.531 13:57:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:49.531 13:57:35 -- common/autotest_common.sh@1580 -- # device=0x0010 00:04:49.531 13:57:35 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.531 13:57:35 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:04:49.531 13:57:35 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:04:49.531 13:57:35 -- common/autotest_common.sh@1593 -- # return 0 00:04:49.531 13:57:35 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:04:49.531 13:57:35 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.531 13:57:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.531 13:57:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.531 13:57:35 -- common/autotest_common.sh@10 -- # set +x 00:04:49.531 ************************************ 00:04:49.531 START TEST unittest 00:04:49.531 ************************************ 00:04:49.531 13:57:35 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.531 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.531 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.531 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:49.531 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.531 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
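The opal_revert_cleanup step above only acts on controllers whose PCI device ID matches the Opal-capable drives it targets; everything else is skipped. Stripped of the helper plumbing, the sysfs filter amounts to something like this (a sketch; the BDF list is assumed to come from gen_nvme.sh as shown in the trace, and the emulated controller here reports 0x0010, so the resulting list stays empty and the function simply returns 0):

    for bdf in 0000:00:10.0; do                            # BDFs from gen_nvme.sh | jq -r '.config[].params.traddr'
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # "0x0010" for the QEMU NVMe device in this run
        if [[ $device == 0x0a54 ]]; then                   # only this device ID is selected for an Opal revert
            printf '%s\n' "$bdf"
        fi
    done
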
00:04:49.531 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:49.531 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:49.531 ++ rpc_py=rpc_cmd 00:04:49.531 ++ set -e 00:04:49.531 ++ shopt -s nullglob 00:04:49.531 ++ shopt -s extglob 00:04:49.531 ++ shopt -s inherit_errexit 00:04:49.531 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:49.531 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:49.531 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:49.531 +++ CONFIG_WPDK_DIR= 00:04:49.531 +++ CONFIG_ASAN=y 00:04:49.531 +++ CONFIG_VBDEV_COMPRESS=n 00:04:49.531 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:49.531 +++ CONFIG_USDT=n 00:04:49.531 +++ CONFIG_CUSTOMOCF=n 00:04:49.531 +++ CONFIG_PREFIX=/usr/local 00:04:49.531 +++ CONFIG_RBD=n 00:04:49.531 +++ CONFIG_LIBDIR= 00:04:49.531 +++ CONFIG_IDXD=y 00:04:49.531 +++ CONFIG_NVME_CUSE=y 00:04:49.531 +++ CONFIG_SMA=n 00:04:49.531 +++ CONFIG_VTUNE=n 00:04:49.531 +++ CONFIG_TSAN=n 00:04:49.531 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:49.531 +++ CONFIG_VFIO_USER_DIR= 00:04:49.531 +++ CONFIG_PGO_CAPTURE=n 00:04:49.531 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:49.531 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:49.531 +++ CONFIG_LTO=n 00:04:49.531 +++ CONFIG_ISCSI_INITIATOR=y 00:04:49.531 +++ CONFIG_CET=n 00:04:49.531 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:49.531 +++ CONFIG_OCF_PATH= 00:04:49.531 +++ CONFIG_RDMA_SET_TOS=y 00:04:49.531 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:49.531 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:49.531 +++ CONFIG_UBLK=n 00:04:49.531 +++ CONFIG_ISAL_CRYPTO=y 00:04:49.531 +++ CONFIG_OPENSSL_PATH= 00:04:49.531 +++ CONFIG_OCF=n 00:04:49.531 +++ CONFIG_FUSE=n 00:04:49.531 +++ CONFIG_VTUNE_DIR= 00:04:49.531 +++ CONFIG_FUZZER_LIB= 00:04:49.531 +++ CONFIG_FUZZER=n 00:04:49.531 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:49.531 +++ CONFIG_CRYPTO=n 00:04:49.531 +++ CONFIG_PGO_USE=n 00:04:49.531 +++ CONFIG_VHOST=y 00:04:49.531 +++ CONFIG_DAOS=n 00:04:49.531 +++ CONFIG_DPDK_INC_DIR= 00:04:49.531 +++ CONFIG_DAOS_DIR= 00:04:49.531 +++ CONFIG_UNIT_TESTS=y 00:04:49.531 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:49.531 +++ CONFIG_VIRTIO=y 00:04:49.531 +++ CONFIG_DPDK_UADK=n 00:04:49.531 +++ CONFIG_COVERAGE=y 00:04:49.531 +++ CONFIG_RDMA=y 00:04:49.531 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:49.531 +++ CONFIG_URING_PATH= 00:04:49.531 +++ CONFIG_XNVME=n 00:04:49.531 +++ CONFIG_VFIO_USER=n 00:04:49.531 +++ CONFIG_ARCH=native 00:04:49.531 +++ CONFIG_HAVE_EVP_MAC=y 00:04:49.531 +++ CONFIG_URING_ZNS=n 00:04:49.531 +++ CONFIG_WERROR=y 00:04:49.531 +++ CONFIG_HAVE_LIBBSD=n 00:04:49.531 +++ CONFIG_UBSAN=n 00:04:49.531 +++ CONFIG_IPSEC_MB_DIR= 00:04:49.531 +++ CONFIG_GOLANG=n 00:04:49.531 +++ CONFIG_ISAL=y 00:04:49.531 +++ CONFIG_IDXD_KERNEL=n 00:04:49.531 +++ CONFIG_DPDK_LIB_DIR= 00:04:49.531 +++ CONFIG_RDMA_PROV=verbs 00:04:49.531 +++ CONFIG_APPS=y 00:04:49.531 +++ CONFIG_SHARED=n 00:04:49.531 +++ CONFIG_HAVE_KEYUTILS=y 00:04:49.531 +++ CONFIG_FC_PATH= 00:04:49.531 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:49.531 +++ CONFIG_FC=n 00:04:49.531 +++ CONFIG_AVAHI=n 00:04:49.531 +++ CONFIG_FIO_PLUGIN=y 00:04:49.531 +++ CONFIG_RAID5F=n 00:04:49.531 +++ CONFIG_EXAMPLES=y 00:04:49.531 +++ CONFIG_TESTS=y 00:04:49.531 +++ CONFIG_CRYPTO_MLX5=n 00:04:49.531 +++ CONFIG_MAX_LCORES=128 00:04:49.531 +++ CONFIG_IPSEC_MB=n 00:04:49.531 +++ CONFIG_PGO_DIR= 00:04:49.531 +++ CONFIG_DEBUG=y 00:04:49.531 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:49.531 +++ CONFIG_CROSS_PREFIX= 00:04:49.531 
+++ CONFIG_URING=n 00:04:49.532 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:49.532 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:49.532 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:04:49.532 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:49.532 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:49.532 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:49.532 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:49.532 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:49.532 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:49.532 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:49.532 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:49.532 +++ VHOST_APP=("$_app_dir/vhost") 00:04:49.532 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:49.532 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:49.532 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:49.532 +++ [[ #ifndef SPDK_CONFIG_H 00:04:49.532 #define SPDK_CONFIG_H 00:04:49.532 #define SPDK_CONFIG_APPS 1 00:04:49.532 #define SPDK_CONFIG_ARCH native 00:04:49.532 #define SPDK_CONFIG_ASAN 1 00:04:49.532 #undef SPDK_CONFIG_AVAHI 00:04:49.532 #undef SPDK_CONFIG_CET 00:04:49.532 #define SPDK_CONFIG_COVERAGE 1 00:04:49.532 #define SPDK_CONFIG_CROSS_PREFIX 00:04:49.532 #undef SPDK_CONFIG_CRYPTO 00:04:49.532 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:49.532 #undef SPDK_CONFIG_CUSTOMOCF 00:04:49.532 #undef SPDK_CONFIG_DAOS 00:04:49.532 #define SPDK_CONFIG_DAOS_DIR 00:04:49.532 #define SPDK_CONFIG_DEBUG 1 00:04:49.532 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:49.532 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:49.532 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:49.532 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:49.532 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:49.532 #undef SPDK_CONFIG_DPDK_UADK 00:04:49.532 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:49.532 #define SPDK_CONFIG_EXAMPLES 1 00:04:49.532 #undef SPDK_CONFIG_FC 00:04:49.532 #define SPDK_CONFIG_FC_PATH 00:04:49.532 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:49.532 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:49.532 #undef SPDK_CONFIG_FUSE 00:04:49.532 #undef SPDK_CONFIG_FUZZER 00:04:49.532 #define SPDK_CONFIG_FUZZER_LIB 00:04:49.532 #undef SPDK_CONFIG_GOLANG 00:04:49.532 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:49.532 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:04:49.532 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:49.532 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:04:49.532 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:49.532 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:49.532 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:49.532 #define SPDK_CONFIG_IDXD 1 00:04:49.532 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:49.532 #undef SPDK_CONFIG_IPSEC_MB 00:04:49.532 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:49.532 #define SPDK_CONFIG_ISAL 1 00:04:49.532 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:49.532 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:49.532 #define SPDK_CONFIG_LIBDIR 00:04:49.532 #undef SPDK_CONFIG_LTO 00:04:49.532 #define SPDK_CONFIG_MAX_LCORES 128 00:04:49.532 #define SPDK_CONFIG_NVME_CUSE 1 00:04:49.532 #undef SPDK_CONFIG_OCF 00:04:49.532 #define SPDK_CONFIG_OCF_PATH 00:04:49.532 #define SPDK_CONFIG_OPENSSL_PATH 00:04:49.532 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:49.532 #define SPDK_CONFIG_PGO_DIR 00:04:49.532 #undef SPDK_CONFIG_PGO_USE 00:04:49.532 #define SPDK_CONFIG_PREFIX /usr/local 00:04:49.532 #undef SPDK_CONFIG_RAID5F 00:04:49.532 #undef 
SPDK_CONFIG_RBD 00:04:49.532 #define SPDK_CONFIG_RDMA 1 00:04:49.532 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:49.532 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:49.532 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:49.532 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:49.532 #undef SPDK_CONFIG_SHARED 00:04:49.532 #undef SPDK_CONFIG_SMA 00:04:49.532 #define SPDK_CONFIG_TESTS 1 00:04:49.532 #undef SPDK_CONFIG_TSAN 00:04:49.532 #undef SPDK_CONFIG_UBLK 00:04:49.532 #undef SPDK_CONFIG_UBSAN 00:04:49.532 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:49.532 #undef SPDK_CONFIG_URING 00:04:49.532 #define SPDK_CONFIG_URING_PATH 00:04:49.532 #undef SPDK_CONFIG_URING_ZNS 00:04:49.532 #undef SPDK_CONFIG_USDT 00:04:49.532 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:49.532 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:49.532 #undef SPDK_CONFIG_VFIO_USER 00:04:49.532 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:49.532 #define SPDK_CONFIG_VHOST 1 00:04:49.532 #define SPDK_CONFIG_VIRTIO 1 00:04:49.532 #undef SPDK_CONFIG_VTUNE 00:04:49.532 #define SPDK_CONFIG_VTUNE_DIR 00:04:49.532 #define SPDK_CONFIG_WERROR 1 00:04:49.532 #define SPDK_CONFIG_WPDK_DIR 00:04:49.532 #undef SPDK_CONFIG_XNVME 00:04:49.532 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:49.532 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:49.532 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.532 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:49.532 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.532 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.532 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:49.532 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:49.532 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:49.532 ++++ export PATH 00:04:49.532 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:04:49.532 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:49.532 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:49.532 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:49.532 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:49.532 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:49.532 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:49.532 +++ TEST_TAG=N/A 00:04:49.532 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:49.532 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:04:49.532 ++++ uname -s 00:04:49.532 +++ PM_OS=Linux 00:04:49.532 +++ MONITOR_RESOURCES_SUDO=() 00:04:49.532 +++ declare -A MONITOR_RESOURCES_SUDO 00:04:49.532 +++ 
MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:04:49.532 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:04:49.532 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:04:49.532 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:04:49.532 +++ SUDO[0]= 00:04:49.532 +++ SUDO[1]='sudo -E' 00:04:49.532 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:04:49.532 +++ [[ Linux == FreeBSD ]] 00:04:49.532 +++ [[ Linux == Linux ]] 00:04:49.532 +++ [[ QEMU != QEMU ]] 00:04:49.532 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:04:49.532 ++ : 0 00:04:49.532 ++ export RUN_NIGHTLY 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_RUN_VALGRIND 00:04:49.532 ++ : 1 00:04:49.532 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:49.532 ++ : 1 00:04:49.532 ++ export SPDK_TEST_UNITTEST 00:04:49.532 ++ : 00:04:49.532 ++ export SPDK_TEST_AUTOBUILD 00:04:49.532 ++ : 1 00:04:49.532 ++ export SPDK_TEST_RELEASE_BUILD 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_ISAL 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_ISCSI 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_NVME 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_NVME_PMR 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_NVME_BP 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_NVME_CLI 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_NVME_CUSE 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_NVME_FDP 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_NVMF 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_VFIOUSER 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_FUZZER 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_FUZZER_SHORT 00:04:49.532 ++ : rdma 00:04:49.532 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_RBD 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_VHOST 00:04:49.532 ++ : 1 00:04:49.532 ++ export SPDK_TEST_BLOCKDEV 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_IOAT 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_BLOBFS 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_VHOST_INIT 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_LVOL 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:49.532 ++ : 1 00:04:49.532 ++ export SPDK_RUN_ASAN 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_RUN_UBSAN 00:04:49.532 ++ : 00:04:49.532 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_RUN_NON_ROOT 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_CRYPTO 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_FTL 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_OCF 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_VMD 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_OPAL 00:04:49.532 ++ : 00:04:49.532 ++ export SPDK_TEST_NATIVE_DPDK 00:04:49.532 ++ : true 00:04:49.532 ++ export SPDK_AUTOTEST_X 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_RAID5 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_URING 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_USDT 00:04:49.532 ++ : 1 00:04:49.532 ++ export SPDK_TEST_USE_IGB_UIO 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_SCHEDULER 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_SCANBUILD 00:04:49.532 ++ : 00:04:49.532 ++ export SPDK_TEST_NVMF_NICS 00:04:49.532 ++ : 0 00:04:49.532 ++ export SPDK_TEST_SMA 
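The bare ': 0' / ': 1' records around this point, each followed by an export, are bash xtrace of the default-assignment idiom applied to every test flag: keep whatever autorun-spdk.conf already set, otherwise fall back to a default. The underlying pattern is roughly (a sketch of the idiom, not a verbatim excerpt of autotest_common.sh):

    : "${SPDK_TEST_UNITTEST:=0}"   # expands to ': 1' in this run because autorun-spdk.conf set the flag to 1
    export SPDK_TEST_UNITTEST      # makes the resolved value visible to the test scripts
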
00:04:49.532 ++ : 1 00:04:49.532 ++ export SPDK_TEST_DAOS 00:04:49.533 ++ : 0 00:04:49.533 ++ export SPDK_TEST_XNVME 00:04:49.533 ++ : 0 00:04:49.533 ++ export SPDK_TEST_ACCEL_DSA 00:04:49.533 ++ : 0 00:04:49.533 ++ export SPDK_TEST_ACCEL_IAA 00:04:49.533 ++ : 00:04:49.533 ++ export SPDK_TEST_FUZZER_TARGET 00:04:49.533 ++ : 0 00:04:49.533 ++ export SPDK_TEST_NVMF_MDNS 00:04:49.533 ++ : 0 00:04:49.533 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:49.533 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:49.533 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:49.533 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:49.533 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:49.533 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.533 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.533 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.533 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.533 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:49.533 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:49.533 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:49.533 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:49.533 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:49.533 ++ PYTHONDONTWRITEBYTECODE=1 00:04:49.533 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:49.533 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:49.533 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:49.533 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:49.533 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:49.533 ++ rm -rf /var/tmp/asan_suppression_file 00:04:49.533 ++ cat 00:04:49.533 ++ echo leak:libfuse3.so 00:04:49.533 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:49.533 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:49.533 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:49.533 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:49.533 ++ '[' -z /var/spdk/dependencies ']' 00:04:49.533 ++ export DEPENDENCY_DIR 00:04:49.533 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:49.533 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:49.533 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:49.533 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:49.533 ++ export QEMU_BIN= 00:04:49.533 ++ QEMU_BIN= 00:04:49.533 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:49.533 ++ 
VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:49.533 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:49.533 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:49.533 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:49.533 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:49.533 ++ '[' 0 -eq 0 ']' 00:04:49.533 ++ export valgrind= 00:04:49.533 ++ valgrind= 00:04:49.533 +++ uname -s 00:04:49.533 ++ '[' Linux = Linux ']' 00:04:49.533 ++ HUGEMEM=4096 00:04:49.533 ++ export CLEAR_HUGE=yes 00:04:49.533 ++ CLEAR_HUGE=yes 00:04:49.533 ++ [[ 0 -eq 1 ]] 00:04:49.533 ++ [[ 0 -eq 1 ]] 00:04:49.533 ++ MAKE=make 00:04:49.533 +++ nproc 00:04:49.533 ++ MAKEFLAGS=-j10 00:04:49.533 ++ export HUGEMEM=4096 00:04:49.533 ++ HUGEMEM=4096 00:04:49.533 ++ NO_HUGE=() 00:04:49.533 ++ TEST_MODE= 00:04:49.533 ++ [[ -z '' ]] 00:04:49.533 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:49.533 ++ exec 00:04:49.533 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:49.533 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:49.533 ++ set_test_storage 2147483648 00:04:49.533 ++ [[ -v testdir ]] 00:04:49.533 ++ local requested_size=2147483648 00:04:49.533 ++ local mount target_dir 00:04:49.533 ++ local -A mounts fss sizes avails uses 00:04:49.533 ++ local source fs size avail mount use 00:04:49.533 ++ local storage_fallback storage_candidates 00:04:49.533 +++ mktemp -udt spdk.XXXXXX 00:04:49.533 ++ storage_fallback=/tmp/spdk.cVKDnC 00:04:49.533 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:49.533 ++ [[ -n '' ]] 00:04:49.533 ++ [[ -n '' ]] 00:04:49.533 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.cVKDnC/tests/unit /tmp/spdk.cVKDnC 00:04:49.533 ++ requested_size=2214592512 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 +++ df -T 00:04:49.533 +++ grep -v Filesystem 00:04:49.533 ++ mounts["$mount"]=devtmpfs 00:04:49.533 ++ fss["$mount"]=devtmpfs 00:04:49.533 ++ avails["$mount"]=4194304 00:04:49.533 ++ sizes["$mount"]=4194304 00:04:49.533 ++ uses["$mount"]=0 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ mounts["$mount"]=tmpfs 00:04:49.533 ++ fss["$mount"]=tmpfs 00:04:49.533 ++ avails["$mount"]=6270410752 00:04:49.533 ++ sizes["$mount"]=6270410752 00:04:49.533 ++ uses["$mount"]=0 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ mounts["$mount"]=tmpfs 00:04:49.533 ++ fss["$mount"]=tmpfs 00:04:49.533 ++ avails["$mount"]=2490781696 00:04:49.533 ++ sizes["$mount"]=2508165120 00:04:49.533 ++ uses["$mount"]=17383424 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ mounts["$mount"]=/dev/vda5 00:04:49.533 ++ fss["$mount"]=xfs 00:04:49.533 ++ avails["$mount"]=13036015616 00:04:49.533 ++ sizes["$mount"]=20303577088 00:04:49.533 ++ uses["$mount"]=7267561472 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ mounts["$mount"]=/dev/vda2 00:04:49.533 ++ fss["$mount"]=xfs 00:04:49.533 ++ avails["$mount"]=896184320 00:04:49.533 ++ sizes["$mount"]=1042161664 00:04:49.533 ++ uses["$mount"]=145977344 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ mounts["$mount"]=/dev/vda1 00:04:49.533 ++ fss["$mount"]=vfat 00:04:49.533 ++ avails["$mount"]=97312768 00:04:49.533 ++ sizes["$mount"]=104607744 00:04:49.533 ++ 
uses["$mount"]=7294976 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ mounts["$mount"]=tmpfs 00:04:49.533 ++ fss["$mount"]=tmpfs 00:04:49.533 ++ avails["$mount"]=1254076416 00:04:49.533 ++ sizes["$mount"]=1254080512 00:04:49.533 ++ uses["$mount"]=4096 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt/output 00:04:49.533 ++ fss["$mount"]=fuse.sshfs 00:04:49.533 ++ avails["$mount"]=93559767040 00:04:49.533 ++ sizes["$mount"]=105088212992 00:04:49.533 ++ uses["$mount"]=6143012864 00:04:49.533 ++ read -r source fs size use avail _ mount 00:04:49.533 ++ printf '* Looking for test storage...\n' 00:04:49.533 * Looking for test storage... 00:04:49.533 ++ local target_space new_size 00:04:49.533 ++ for target_dir in "${storage_candidates[@]}" 00:04:49.533 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.533 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:49.533 ++ mount=/ 00:04:49.533 ++ target_space=13036015616 00:04:49.533 ++ (( target_space == 0 || target_space < requested_size )) 00:04:49.533 ++ (( target_space >= requested_size )) 00:04:49.533 ++ [[ xfs == tmpfs ]] 00:04:49.533 ++ [[ xfs == ramfs ]] 00:04:49.533 ++ [[ / == / ]] 00:04:49.533 ++ new_size=9482153984 00:04:49.533 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:49.533 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:49.533 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:49.533 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.533 ++ return 0 00:04:49.533 ++ set -o errtrace 00:04:49.533 ++ shopt -s extdebug 00:04:49.533 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:49.533 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@1687 -- # true 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@29 -- # exec 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:49.533 13:57:35 unittest -- common/autotest_common.sh@18 -- # set -x 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@181 -- # hash lcov 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:04:49.533 --rc lcov_branch_coverage=1 00:04:49.533 --rc lcov_function_coverage=1 00:04:49.533 --rc genhtml_branch_coverage=1 00:04:49.533 --rc genhtml_function_coverage=1 00:04:49.533 --rc genhtml_legend=1 00:04:49.533 --rc geninfo_all_blocks=1 00:04:49.533 ' 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:04:49.533 --rc lcov_branch_coverage=1 00:04:49.533 --rc lcov_function_coverage=1 00:04:49.533 --rc genhtml_branch_coverage=1 00:04:49.533 --rc genhtml_function_coverage=1 00:04:49.533 --rc genhtml_legend=1 00:04:49.533 --rc geninfo_all_blocks=1 00:04:49.533 ' 00:04:49.533 13:57:35 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:04:49.534 --rc lcov_branch_coverage=1 00:04:49.534 --rc lcov_function_coverage=1 00:04:49.534 --rc genhtml_branch_coverage=1 00:04:49.534 --rc genhtml_function_coverage=1 00:04:49.534 --rc genhtml_legend=1 00:04:49.534 --rc geninfo_all_blocks=1 00:04:49.534 --no-external' 00:04:49.534 13:57:35 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:04:49.534 --rc lcov_branch_coverage=1 00:04:49.534 --rc lcov_function_coverage=1 00:04:49.534 --rc genhtml_branch_coverage=1 00:04:49.534 --rc genhtml_function_coverage=1 00:04:49.534 --rc genhtml_legend=1 00:04:49.534 --rc geninfo_all_blocks=1 00:04:49.534 --no-external' 00:04:49.534 13:57:35 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:01.734 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:01.734 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:11.728 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:11.728 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:11.728 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:11.729 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:11.729 
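The geninfo warnings in this stretch come from the header-compilation objects under test/cpp_headers, which contain no executable functions ("no functions found") and therefore yield no coverage data; they appear expected for header-only objects. As a rough sketch of how such .gcno/.gcda output is normally folded into a report with the lcov toolchain (the output file names below are hypothetical, not taken from this run):

  lcov --capture --directory /home/vagrant/spdk_repo/spdk --output-file coverage.info    # collect per-object coverage
  lcov --remove coverage.info '/usr/*' --output-file coverage.trimmed.info               # optionally drop system headers
  genhtml coverage.trimmed.info --output-directory coverage_html                         # render an HTML report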
/home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:11.729 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:11.729 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:43.798 13:58:27 unittest -- unit/unittest.sh@208 -- # uname -m 00:05:43.798 13:58:27 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:05:43.798 13:58:27 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:43.798 13:58:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.798 13:58:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.798 13:58:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 ************************************ 00:05:43.798 START TEST unittest_pci_event 00:05:43.798 ************************************ 00:05:43.798 13:58:27 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:43.798 00:05:43.798 00:05:43.798 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.798 http://cunit.sourceforge.net/ 00:05:43.798 00:05:43.798 00:05:43.798 Suite: pci_event 00:05:43.798 Test: test_pci_parse_event ...[2024-07-15 13:58:27.567859] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:43.798 [2024-07-15 13:58:27.568734] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:43.798 passed 00:05:43.798 00:05:43.798 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.798 suites 1 1 n/a 0 0 00:05:43.798 tests 1 1 1 0 0 00:05:43.798 asserts 15 15 15 0 n/a 00:05:43.798 00:05:43.798 Elapsed time = 0.001 seconds 00:05:43.798 00:05:43.798 real 0m0.028s 00:05:43.798 user 0m0.013s 00:05:43.798 sys 0m0.012s 00:05:43.798 13:58:27 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.798 13:58:27 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 ************************************ 00:05:43.798 END TEST 
unittest_pci_event 00:05:43.798 ************************************ 00:05:43.798 13:58:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:43.798 13:58:27 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:43.798 13:58:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.798 13:58:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.798 13:58:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:43.798 ************************************ 00:05:43.798 START TEST unittest_include 00:05:43.798 ************************************ 00:05:43.798 13:58:27 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:43.798 00:05:43.798 00:05:43.798 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.798 http://cunit.sourceforge.net/ 00:05:43.798 00:05:43.798 00:05:43.798 Suite: histogram 00:05:43.798 Test: histogram_test ...passed 00:05:43.798 Test: histogram_merge ...passed 00:05:43.798 00:05:43.798 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.798 suites 1 1 n/a 0 0 00:05:43.798 tests 2 2 2 0 0 00:05:43.798 asserts 50 50 50 0 n/a 00:05:43.798 00:05:43.798 Elapsed time = 0.002 seconds 00:05:43.798 00:05:43.799 real 0m0.026s 00:05:43.799 user 0m0.019s 00:05:43.799 sys 0m0.007s 00:05:43.799 13:58:27 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.799 13:58:27 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:05:43.799 ************************************ 00:05:43.799 END TEST unittest_include 00:05:43.799 ************************************ 00:05:43.799 13:58:27 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:43.799 13:58:27 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:05:43.799 13:58:27 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:43.799 13:58:27 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.799 13:58:27 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:43.799 ************************************ 00:05:43.799 START TEST unittest_bdev 00:05:43.799 ************************************ 00:05:43.799 13:58:27 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:05:43.799 13:58:27 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:43.799 00:05:43.799 00:05:43.799 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.799 http://cunit.sourceforge.net/ 00:05:43.799 00:05:43.799 00:05:43.799 Suite: bdev 00:05:43.799 Test: bytes_to_blocks_test ...passed 00:05:43.799 Test: num_blocks_test ...passed 00:05:43.799 Test: io_valid_test ...passed 00:05:43.799 Test: open_write_test ...[2024-07-15 13:58:27.783587] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:43.799 [2024-07-15 13:58:27.783851] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:43.799 [2024-07-15 13:58:27.783949] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:43.799 passed 00:05:43.799 Test: claim_test ...passed 
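Each run_test entry above drives a standalone CUnit binary; the paths logged (e.g. .../env_dpdk/pci_event.c/pci_event_ut and .../bdev/bdev.c/bdev_ut) can be invoked directly when reproducing a result outside unittest.sh. A minimal sketch, assuming the same checkout location as this run:

  cd /home/vagrant/spdk_repo/spdk
  ./test/unit/lib/env_dpdk/pci_event.c/pci_event_ut   # re-run one suite in isolation
  echo $?                                             # non-zero typically means CUnit assertions failed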
00:05:43.799 Test: alias_add_del_test ...[2024-07-15 13:58:27.887535] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:43.799 [2024-07-15 13:58:27.887712] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:43.799 [2024-07-15 13:58:27.887790] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:43.799 passed 00:05:43.799 Test: get_device_stat_test ...passed 00:05:43.799 Test: bdev_io_types_test ...passed 00:05:43.799 Test: bdev_io_wait_test ...passed 00:05:43.799 Test: bdev_io_spans_split_test ...passed 00:05:43.799 Test: bdev_io_boundary_split_test ...passed 00:05:43.799 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-15 13:58:28.075602] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:43.799 passed 00:05:43.799 Test: bdev_io_mix_split_test ...passed 00:05:43.799 Test: bdev_io_split_with_io_wait ...passed 00:05:43.799 Test: bdev_io_write_unit_split_test ...[2024-07-15 13:58:28.209528] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:43.799 [2024-07-15 13:58:28.209669] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:43.799 [2024-07-15 13:58:28.209707] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:43.799 [2024-07-15 13:58:28.209771] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:43.799 passed 00:05:43.799 Test: bdev_io_alignment_with_boundary ...passed 00:05:43.799 Test: bdev_io_alignment ...passed 00:05:43.799 Test: bdev_histograms ...passed 00:05:43.799 Test: bdev_write_zeroes ...passed 00:05:43.799 Test: bdev_compare_and_write ...passed 00:05:43.799 Test: bdev_compare ...passed 00:05:43.799 Test: bdev_compare_emulated ...passed 00:05:43.799 Test: bdev_zcopy_write ...passed 00:05:43.799 Test: bdev_zcopy_read ...passed 00:05:43.799 Test: bdev_open_while_hotremove ...passed 00:05:43.799 Test: bdev_close_while_hotremove ...passed 00:05:43.799 Test: bdev_open_ext_test ...[2024-07-15 13:58:28.696479] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:43.799 passed 00:05:43.799 Test: bdev_open_ext_unregister ...[2024-07-15 13:58:28.696660] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:43.799 passed 00:05:43.799 Test: bdev_set_io_timeout ...passed 00:05:43.799 Test: bdev_set_qd_sampling ...passed 00:05:43.799 Test: lba_range_overlap ...passed 00:05:43.799 Test: lock_lba_range_check_ranges ...passed 00:05:43.799 Test: lock_lba_range_with_io_outstanding ...passed 00:05:43.799 Test: lock_lba_range_overlapped ...passed 00:05:43.799 Test: bdev_quiesce ...[2024-07-15 13:58:28.921740] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
00:05:43.799 passed 00:05:43.799 Test: bdev_io_abort ...passed 00:05:43.799 Test: bdev_unmap ...passed 00:05:43.799 Test: bdev_write_zeroes_split_test ...passed 00:05:43.799 Test: bdev_set_options_test ...passed 00:05:43.799 Test: bdev_get_memory_domains ...passed 00:05:43.799 Test: bdev_io_ext ...[2024-07-15 13:58:29.055298] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:43.799 passed 00:05:43.799 Test: bdev_io_ext_no_opts ...passed 00:05:43.799 Test: bdev_io_ext_invalid_opts ...passed 00:05:43.799 Test: bdev_io_ext_split ...passed 00:05:43.799 Test: bdev_io_ext_bounce_buffer ...passed 00:05:43.799 Test: bdev_register_uuid_alias ...[2024-07-15 13:58:29.257488] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 2027ce07-7c35-4355-bbbf-c058d0099790 already exists 00:05:43.799 [2024-07-15 13:58:29.257585] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:2027ce07-7c35-4355-bbbf-c058d0099790 alias for bdev bdev0 00:05:43.799 passed 00:05:43.799 Test: bdev_unregister_by_name ...[2024-07-15 13:58:29.278592] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:43.799 [2024-07-15 13:58:29.278648] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7982:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:43.799 passed 00:05:43.799 Test: for_each_bdev_test ...passed 00:05:43.799 Test: bdev_seek_test ...passed 00:05:43.799 Test: bdev_copy ...passed 00:05:43.799 Test: bdev_copy_split_test ...passed 00:05:43.799 Test: examine_locks ...passed 00:05:43.799 Test: claim_v2_rwo ...[2024-07-15 13:58:29.397995] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.398232] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.398362] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.398573] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.398706] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.398876] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:43.799 passed 00:05:43.799 Test: claim_v2_rom ...[2024-07-15 13:58:29.399261] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.399423] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.399563] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:05:43.799 [2024-07-15 13:58:29.399701] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.399889] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:43.799 [2024-07-15 13:58:29.400074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:43.799 passed 00:05:43.799 Test: claim_v2_rwm ...[2024-07-15 13:58:29.400471] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:43.799 [2024-07-15 13:58:29.400636] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.400795] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.400935] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.401064] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.401193] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.401330] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8776:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:43.799 passed 00:05:43.799 Test: claim_v2_existing_writer ...[2024-07-15 13:58:29.401591] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:43.799 [2024-07-15 13:58:29.401743] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8741:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:43.799 passed 00:05:43.799 Test: claim_v2_existing_v1 ...[2024-07-15 13:58:29.402074] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.402265] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.402341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:43.799 passed 00:05:43.799 Test: claim_v1_existing_v2 ...[2024-07-15 13:58:29.402684] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:43.799 [2024-07-15 13:58:29.402871] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:43.799 [2024-07-15 
13:58:29.402955] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:43.799 passed 00:05:43.799 Test: examine_claimed ...[2024-07-15 13:58:29.403572] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:43.799 passed 00:05:43.799 00:05:43.799 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.799 suites 1 1 n/a 0 0 00:05:43.799 tests 59 59 59 0 0 00:05:43.799 asserts 4599 4599 4599 0 n/a 00:05:43.799 00:05:43.799 Elapsed time = 1.667 seconds 00:05:43.800 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:43.800 00:05:43.800 00:05:43.800 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.800 http://cunit.sourceforge.net/ 00:05:43.800 00:05:43.800 00:05:43.800 Suite: nvme 00:05:43.800 Test: test_create_ctrlr ...passed 00:05:43.800 Test: test_reset_ctrlr ...[2024-07-15 13:58:29.443355] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 passed 00:05:43.800 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:43.800 Test: test_failover_ctrlr ...passed 00:05:43.800 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-15 13:58:29.447973] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.448501] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.449045] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 passed 00:05:43.800 Test: test_pending_reset ...[2024-07-15 13:58:29.451582] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.452199] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 passed 00:05:43.800 Test: test_attach_ctrlr ...[2024-07-15 13:58:29.454318] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:43.800 passed 00:05:43.800 Test: test_aer_cb ...passed 00:05:43.800 Test: test_submit_nvme_cmd ...passed 00:05:43.800 Test: test_add_remove_trid ...passed 00:05:43.800 Test: test_abort ...[2024-07-15 13:58:29.458787] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:43.800 passed 00:05:43.800 Test: test_get_io_qpair ...passed 00:05:43.800 Test: test_bdev_unregister ...passed 00:05:43.800 Test: test_compare_ns ...passed 00:05:43.800 Test: test_init_ana_log_page ...passed 00:05:43.800 Test: test_get_memory_domains ...passed 00:05:43.800 Test: test_reconnect_qpair ...[2024-07-15 13:58:29.461990] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:43.800 passed 00:05:43.800 Test: test_create_bdev_ctrlr ...[2024-07-15 13:58:29.462690] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:43.800 passed 00:05:43.800 Test: test_add_multi_ns_to_bdev ...[2024-07-15 13:58:29.464130] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:43.800 passed 00:05:43.800 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:43.800 Test: test_admin_path ...passed 00:05:43.800 Test: test_reset_bdev_ctrlr ...passed 00:05:43.800 Test: test_find_io_path ...passed 00:05:43.800 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:43.800 Test: test_retry_io_for_io_path_error ...passed 00:05:43.800 Test: test_retry_io_count ...passed 00:05:43.800 Test: test_concurrent_read_ana_log_page ...passed 00:05:43.800 Test: test_retry_io_for_ana_error ...passed 00:05:43.800 Test: test_check_io_error_resiliency_params ...[2024-07-15 13:58:29.471896] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:43.800 [2024-07-15 13:58:29.472104] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:43.800 [2024-07-15 13:58:29.472262] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:43.800 [2024-07-15 13:58:29.472437] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:43.800 [2024-07-15 13:58:29.472592] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:43.800 [2024-07-15 13:58:29.472783] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:43.800 [2024-07-15 13:58:29.472940] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:43.800 [2024-07-15 13:58:29.473140] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:43.800 [2024-07-15 13:58:29.473296] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:43.800 passed 00:05:43.800 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:05:43.800 Test: test_reconnect_ctrlr ...[2024-07-15 13:58:29.474445] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.474681] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:05:43.800 [2024-07-15 13:58:29.474998] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.475241] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.475466] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 passed 00:05:43.800 Test: test_retry_failover_ctrlr ...[2024-07-15 13:58:29.476086] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 passed 00:05:43.800 Test: test_fail_path ...[2024-07-15 13:58:29.476884] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.477124] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.477353] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.477590] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.477820] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 passed 00:05:43.800 Test: test_nvme_ns_cmp ...passed 00:05:43.800 Test: test_ana_transition ...passed 00:05:43.800 Test: test_set_preferred_path ...passed 00:05:43.800 Test: test_find_next_io_path ...passed 00:05:43.800 Test: test_find_io_path_min_qd ...passed 00:05:43.800 Test: test_disable_auto_failback ...[2024-07-15 13:58:29.480239] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 passed 00:05:43.800 Test: test_set_multipath_policy ...passed 00:05:43.800 Test: test_uuid_generation ...passed 00:05:43.800 Test: test_retry_io_to_same_path ...passed 00:05:43.800 Test: test_race_between_reset_and_disconnected ...passed 00:05:43.800 Test: test_ctrlr_op_rpc ...passed 00:05:43.800 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:43.800 Test: test_disable_enable_ctrlr ...[2024-07-15 13:58:29.484371] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:43.800 [2024-07-15 13:58:29.484637] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
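The repeated "*ERROR*: Resetting controller failed." lines in this suite are printed by the error-path tests themselves; the Run Summary that follows still reports 0 failed, so they do not indicate a regression. One assumed way to pull only the summaries out of a saved copy of this log (the file name build.log is hypothetical):

  grep -A3 'Run Summary' build.log   # show the suites/tests/asserts totals for each CUnit run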
00:05:43.800 passed 00:05:43.800 Test: test_delete_ctrlr_done ...passed 00:05:43.800 Test: test_ns_remove_during_reset ...passed 00:05:43.800 Test: test_io_path_is_current ...passed 00:05:43.800 00:05:43.800 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.800 suites 1 1 n/a 0 0 00:05:43.800 tests 49 49 49 0 0 00:05:43.800 asserts 3577 3577 3577 0 n/a 00:05:43.800 00:05:43.800 Elapsed time = 0.032 seconds 00:05:43.800 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:43.800 00:05:43.800 00:05:43.800 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.800 http://cunit.sourceforge.net/ 00:05:43.800 00:05:43.800 Test Options 00:05:43.800 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:43.800 00:05:43.800 Suite: raid 00:05:43.800 Test: test_create_raid ...passed 00:05:43.800 Test: test_create_raid_superblock ...passed 00:05:43.800 Test: test_delete_raid ...passed 00:05:43.800 Test: test_create_raid_invalid_args ...[2024-07-15 13:58:29.524295] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:43.800 [2024-07-15 13:58:29.524824] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:43.800 [2024-07-15 13:58:29.525370] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:43.800 [2024-07-15 13:58:29.525738] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:43.800 [2024-07-15 13:58:29.525974] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:43.800 [2024-07-15 13:58:29.526832] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:43.800 [2024-07-15 13:58:29.527008] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:05:43.800 passed 00:05:43.800 Test: test_delete_raid_invalid_args ...passed 00:05:43.800 Test: test_io_channel ...passed 00:05:43.800 Test: test_reset_io ...passed 00:05:43.800 Test: test_multi_raid ...passed 00:05:43.800 Test: test_io_type_supported ...passed 00:05:43.800 Test: test_raid_json_dump_info ...passed 00:05:43.800 Test: test_context_size ...passed 00:05:43.800 Test: test_raid_level_conversions ...passed 00:05:43.800 Test: test_raid_io_split ...passed 00:05:43.800 Test: test_raid_process ...passed 00:05:43.800 00:05:43.800 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.800 suites 1 1 n/a 0 0 00:05:43.800 tests 14 14 14 0 0 00:05:43.800 asserts 6183 6183 6183 0 n/a 00:05:43.800 00:05:43.800 Elapsed time = 0.018 seconds 00:05:43.800 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:43.800 00:05:43.800 00:05:43.800 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.800 http://cunit.sourceforge.net/ 00:05:43.800 00:05:43.800 00:05:43.800 Suite: raid_sb 00:05:43.800 Test: test_raid_bdev_write_superblock ...passed 00:05:43.800 Test: test_raid_bdev_load_base_bdev_superblock 
...passed 00:05:43.801 Test: test_raid_bdev_parse_superblock ...[2024-07-15 13:58:29.570412] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:43.801 passed 00:05:43.801 Suite: raid_sb_md 00:05:43.801 Test: test_raid_bdev_write_superblock ...passed 00:05:43.801 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:43.801 Test: test_raid_bdev_parse_superblock ...[2024-07-15 13:58:29.571811] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:43.801 passed 00:05:43.801 Suite: raid_sb_md_interleaved 00:05:43.801 Test: test_raid_bdev_write_superblock ...passed 00:05:43.801 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:43.801 Test: test_raid_bdev_parse_superblock ...[2024-07-15 13:58:29.572819] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:43.801 passed 00:05:43.801 00:05:43.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.801 suites 3 3 n/a 0 0 00:05:43.801 tests 9 9 9 0 0 00:05:43.801 asserts 139 139 139 0 n/a 00:05:43.801 00:05:43.801 Elapsed time = 0.002 seconds 00:05:43.801 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:43.801 00:05:43.801 00:05:43.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.801 http://cunit.sourceforge.net/ 00:05:43.801 00:05:43.801 00:05:43.801 Suite: concat 00:05:43.801 Test: test_concat_start ...passed 00:05:43.801 Test: test_concat_rw ...passed 00:05:43.801 Test: test_concat_null_payload ...passed 00:05:43.801 00:05:43.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.801 suites 1 1 n/a 0 0 00:05:43.801 tests 3 3 3 0 0 00:05:43.801 asserts 8460 8460 8460 0 n/a 00:05:43.801 00:05:43.801 Elapsed time = 0.004 seconds 00:05:43.801 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:05:43.801 00:05:43.801 00:05:43.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.801 http://cunit.sourceforge.net/ 00:05:43.801 00:05:43.801 00:05:43.801 Suite: raid0 00:05:43.801 Test: test_write_io ...passed 00:05:43.801 Test: test_read_io ...passed 00:05:43.801 Test: test_unmap_io ...passed 00:05:43.801 Test: test_io_failure ...passed 00:05:43.801 Suite: raid0_dif 00:05:43.801 Test: test_write_io ...passed 00:05:43.801 Test: test_read_io ...passed 00:05:43.801 Test: test_unmap_io ...passed 00:05:43.801 Test: test_io_failure ...passed 00:05:43.801 00:05:43.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.801 suites 2 2 n/a 0 0 00:05:43.801 tests 8 8 8 0 0 00:05:43.801 asserts 368291 368291 368291 0 n/a 00:05:43.801 00:05:43.801 Elapsed time = 0.082 seconds 00:05:43.801 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:43.801 00:05:43.801 00:05:43.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.801 http://cunit.sourceforge.net/ 00:05:43.801 00:05:43.801 00:05:43.801 Suite: raid1 00:05:43.801 Test: test_raid1_start ...passed 00:05:43.801 Test: test_raid1_read_balancing ...passed 00:05:43.801 Test: test_raid1_write_error ...passed 00:05:43.801 Test: 
test_raid1_read_error ...passed 00:05:43.801 00:05:43.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.801 suites 1 1 n/a 0 0 00:05:43.801 tests 4 4 4 0 0 00:05:43.801 asserts 4374 4374 4374 0 n/a 00:05:43.801 00:05:43.801 Elapsed time = 0.003 seconds 00:05:43.801 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:43.801 00:05:43.801 00:05:43.801 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.801 http://cunit.sourceforge.net/ 00:05:43.801 00:05:43.801 00:05:43.801 Suite: zone 00:05:43.801 Test: test_zone_get_operation ...passed 00:05:43.801 Test: test_bdev_zone_get_info ...passed 00:05:43.801 Test: test_bdev_zone_management ...passed 00:05:43.801 Test: test_bdev_zone_append ...passed 00:05:43.801 Test: test_bdev_zone_append_with_md ...passed 00:05:43.801 Test: test_bdev_zone_appendv ...passed 00:05:43.801 Test: test_bdev_zone_appendv_with_md ...passed 00:05:43.801 Test: test_bdev_io_get_append_location ...passed 00:05:43.801 00:05:43.801 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.801 suites 1 1 n/a 0 0 00:05:43.801 tests 8 8 8 0 0 00:05:43.801 asserts 94 94 94 0 n/a 00:05:43.801 00:05:43.801 Elapsed time = 0.001 seconds 00:05:44.060 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:44.060 00:05:44.060 00:05:44.060 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.060 http://cunit.sourceforge.net/ 00:05:44.060 00:05:44.060 00:05:44.060 Suite: gpt_parse 00:05:44.060 Test: test_parse_mbr_and_primary ...[2024-07-15 13:58:29.815055] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:44.060 [2024-07-15 13:58:29.815504] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:44.060 [2024-07-15 13:58:29.815691] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:44.060 [2024-07-15 13:58:29.815890] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:44.060 [2024-07-15 13:58:29.816024] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:44.060 [2024-07-15 13:58:29.816280] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:44.060 passed 00:05:44.060 Test: test_parse_secondary ...[2024-07-15 13:58:29.816833] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:44.060 [2024-07-15 13:58:29.816984] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:44.060 [2024-07-15 13:58:29.817121] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:44.060 [2024-07-15 13:58:29.817269] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:44.060 passed 00:05:44.060 Test: test_check_mbr ...[2024-07-15 13:58:29.817806] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:44.060 
[2024-07-15 13:58:29.817945] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:44.060 passed 00:05:44.060 Test: test_read_header ...[2024-07-15 13:58:29.818267] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:44.060 [2024-07-15 13:58:29.818480] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:44.060 [2024-07-15 13:58:29.818677] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:44.060 [2024-07-15 13:58:29.818854] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:44.060 [2024-07-15 13:58:29.818991] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:44.060 [2024-07-15 13:58:29.819146] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:44.060 passed 00:05:44.060 Test: test_read_partitions ...[2024-07-15 13:58:29.819464] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:44.061 [2024-07-15 13:58:29.819644] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:44.061 [2024-07-15 13:58:29.819791] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:44.061 [2024-07-15 13:58:29.819923] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:44.061 [2024-07-15 13:58:29.820218] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:44.061 passed 00:05:44.061 00:05:44.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.061 suites 1 1 n/a 0 0 00:05:44.061 tests 5 5 5 0 0 00:05:44.061 asserts 33 33 33 0 n/a 00:05:44.061 00:05:44.061 Elapsed time = 0.003 seconds 00:05:44.061 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:44.061 00:05:44.061 00:05:44.061 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.061 http://cunit.sourceforge.net/ 00:05:44.061 00:05:44.061 00:05:44.061 Suite: bdev_part 00:05:44.061 Test: part_test ...[2024-07-15 13:58:29.849924] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name e20457cb-23c8-5cde-bad1-1f2645f22bc4 already exists 00:05:44.061 [2024-07-15 13:58:29.850311] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:e20457cb-23c8-5cde-bad1-1f2645f22bc4 alias for bdev test1 00:05:44.061 passed 00:05:44.061 Test: part_free_test ...passed 00:05:44.061 Test: part_get_io_channel_test ...passed 00:05:44.061 Test: part_construct_ext ...passed 00:05:44.061 00:05:44.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.061 suites 1 1 n/a 0 0 00:05:44.061 tests 4 4 4 0 0 00:05:44.061 asserts 48 48 48 0 n/a 00:05:44.061 00:05:44.061 Elapsed time = 0.053 seconds 00:05:44.061 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:44.061 00:05:44.061 00:05:44.061 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.061 http://cunit.sourceforge.net/ 00:05:44.061 00:05:44.061 00:05:44.061 Suite: scsi_nvme_suite 00:05:44.061 Test: scsi_nvme_translate_test ...passed 00:05:44.061 00:05:44.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.061 suites 1 1 n/a 0 0 00:05:44.061 tests 1 1 1 0 0 00:05:44.061 asserts 104 104 104 0 n/a 00:05:44.061 00:05:44.061 Elapsed time = 0.000 seconds 00:05:44.061 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:44.061 00:05:44.061 00:05:44.061 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.061 http://cunit.sourceforge.net/ 00:05:44.061 00:05:44.061 00:05:44.061 Suite: lvol 00:05:44.061 Test: ut_lvs_init ...[2024-07-15 13:58:29.961007] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:44.061 [2024-07-15 13:58:29.961446] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:44.061 passed 00:05:44.061 Test: ut_lvol_init ...passed 00:05:44.061 Test: ut_lvol_snapshot ...passed 00:05:44.061 Test: ut_lvol_clone ...passed 00:05:44.061 Test: ut_lvs_destroy ...passed 00:05:44.061 Test: ut_lvs_unload ...passed 00:05:44.061 Test: ut_lvol_resize ...[2024-07-15 13:58:29.963657] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:44.061 passed 00:05:44.061 Test: ut_lvol_set_read_only ...passed 00:05:44.061 Test: ut_lvol_hotremove ...passed 00:05:44.061 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:44.061 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:44.061 Test: ut_lvol_read_write ...passed 00:05:44.061 Test: ut_vbdev_lvol_submit_request ...passed 00:05:44.061 Test: ut_lvol_examine_config ...passed 00:05:44.061 Test: ut_lvol_examine_disk ...[2024-07-15 13:58:29.965131] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:44.061 passed 00:05:44.061 Test: ut_lvol_rename ...[2024-07-15 13:58:29.966130] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:44.061 [2024-07-15 13:58:29.966328] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:44.061 passed 00:05:44.061 Test: ut_bdev_finish ...passed 00:05:44.061 Test: ut_lvs_rename ...passed 00:05:44.061 Test: ut_lvol_seek ...passed 00:05:44.061 Test: ut_esnap_dev_create ...[2024-07-15 13:58:29.967538] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:44.061 [2024-07-15 13:58:29.967716] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:44.061 [2024-07-15 13:58:29.967876] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:44.061 passed 00:05:44.061 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-15 13:58:29.968278] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:44.061 [2024-07-15 13:58:29.968417] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:05:44.061 passed 00:05:44.061 Test: ut_lvol_shallow_copy ...[2024-07-15 13:58:29.969080] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:05:44.061 [2024-07-15 13:58:29.969220] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:05:44.061 passed 00:05:44.061 Test: ut_lvol_set_external_parent ...[2024-07-15 13:58:29.969599] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:05:44.061 passed 00:05:44.061 00:05:44.061 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.061 suites 1 1 n/a 0 0 00:05:44.061 tests 23 23 23 0 0 00:05:44.061 asserts 770 770 770 0 n/a 00:05:44.061 00:05:44.061 Elapsed time = 0.005 seconds 00:05:44.061 13:58:29 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:44.061 00:05:44.061 00:05:44.061 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.061 http://cunit.sourceforge.net/ 00:05:44.061 00:05:44.061 00:05:44.061 Suite: zone_block 00:05:44.061 Test: test_zone_block_create ...passed 00:05:44.061 Test: test_zone_block_create_invalid ...[2024-07-15 13:58:30.025760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:44.061 [2024-07-15 13:58:30.026184] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 13:58:30.026488] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:44.061 [2024-07-15 13:58:30.026660] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 13:58:30.026939] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:44.061 [2024-07-15 13:58:30.027131] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-15 13:58:30.027355] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:44.061 [2024-07-15 13:58:30.027543] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:44.061 Test: test_get_zone_info ...[2024-07-15 13:58:30.028348] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:44.061 [2024-07-15 13:58:30.028564] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.028780] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 passed 00:05:44.061 Test: test_supported_io_types ...passed 00:05:44.061 Test: test_reset_zone ...[2024-07-15 13:58:30.030030] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.030227] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 passed 00:05:44.061 Test: test_open_zone ...[2024-07-15 13:58:30.030960] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.031763] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.031964] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 passed 00:05:44.061 Test: test_zone_write ...[2024-07-15 13:58:30.032804] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:44.061 [2024-07-15 13:58:30.032989] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.033190] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:44.061 [2024-07-15 13:58:30.033371] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.039081] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:44.061 [2024-07-15 13:58:30.039305] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.039521] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:44.061 [2024-07-15 13:58:30.039675] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.061 [2024-07-15 13:58:30.045326] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:44.062 [2024-07-15 13:58:30.045533] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:44.062 passed 00:05:44.062 Test: test_zone_read ...[2024-07-15 13:58:30.046238] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:44.062 [2024-07-15 13:58:30.046411] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 [2024-07-15 13:58:30.046624] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:44.062 [2024-07-15 13:58:30.046809] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 [2024-07-15 13:58:30.047445] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:44.062 [2024-07-15 13:58:30.047592] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 passed 00:05:44.062 Test: test_close_zone ...[2024-07-15 13:58:30.048277] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 [2024-07-15 13:58:30.048504] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 [2024-07-15 13:58:30.048853] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 [2024-07-15 13:58:30.049048] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 passed 00:05:44.062 Test: test_finish_zone ...[2024-07-15 13:58:30.049900] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 [2024-07-15 13:58:30.050099] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 passed 00:05:44.062 Test: test_append_zone ...[2024-07-15 13:58:30.050766] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:44.062 [2024-07-15 13:58:30.050943] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.062 [2024-07-15 13:58:30.051161] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:44.062 [2024-07-15 13:58:30.051317] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:44.320 [2024-07-15 13:58:30.062259] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:44.320 [2024-07-15 13:58:30.062450] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:44.320 passed 00:05:44.320 00:05:44.320 Run Summary: Type Total Ran Passed Failed Inactive 00:05:44.320 suites 1 1 n/a 0 0 00:05:44.320 tests 11 11 11 0 0 00:05:44.320 asserts 3437 3437 3437 0 n/a 00:05:44.320 00:05:44.320 Elapsed time = 0.032 seconds 00:05:44.320 13:58:30 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:44.320 00:05:44.320 00:05:44.320 CUnit - A unit testing framework for C - Version 2.1-3 00:05:44.320 http://cunit.sourceforge.net/ 00:05:44.320 00:05:44.320 00:05:44.320 Suite: bdev 00:05:44.320 Test: basic ...[2024-07-15 13:58:30.143778] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x530d41): Operation not permitted (rc=-1) 00:05:44.320 [2024-07-15 13:58:30.144295] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x530d00): Operation not permitted (rc=-1) 00:05:44.320 [2024-07-15 13:58:30.144497] thread.c:2373:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x530d41): Operation not permitted (rc=-1) 00:05:44.320 passed 00:05:44.320 Test: unregister_and_close ...passed 00:05:44.320 Test: unregister_and_close_different_threads ...passed 00:05:44.579 Test: basic_qos ...passed 00:05:44.579 Test: put_channel_during_reset ...passed 00:05:44.579 Test: aborted_reset ...passed 00:05:44.579 Test: aborted_reset_no_outstanding_io ...passed 00:05:44.579 Test: io_during_reset ...passed 00:05:44.837 Test: reset_completions ...passed 00:05:44.837 Test: io_during_qos_queue ...passed 00:05:44.837 Test: io_during_qos_reset ...passed 00:05:44.837 Test: enomem ...passed 00:05:44.837 Test: enomem_multi_bdev ...passed 00:05:44.837 Test: enomem_multi_bdev_unregister ...passed 00:05:45.097 Test: enomem_multi_io_target ...passed 00:05:45.097 Test: qos_dynamic_enable ...passed 00:05:45.097 Test: bdev_histograms_mt ...passed 00:05:45.097 Test: bdev_set_io_timeout_mt ...[2024-07-15 13:58:30.971369] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:45.097 passed 00:05:45.097 Test: lock_lba_range_then_submit_io ...[2024-07-15 13:58:30.988245] thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x530cc0 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:45.097 passed 00:05:45.097 Test: unregister_during_reset ...passed 00:05:45.097 Test: event_notify_and_close ...passed 00:05:45.356 Test: unregister_and_qos_poller ...passed 00:05:45.356 Suite: bdev_wrong_thread 00:05:45.356 Test: spdk_bdev_register_wt ...[2024-07-15 13:58:31.136111] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000158b80 (0x619000158b80) 00:05:45.356 passed 00:05:45.356 Test: spdk_bdev_examine_wt ...[2024-07-15 13:58:31.136669] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000158b80 (0x619000158b80) 00:05:45.356 passed 00:05:45.357 00:05:45.357 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.357 suites 2 2 n/a 0 0 00:05:45.357 tests 24 24 24 0 0 00:05:45.357 asserts 621 621 621 0 n/a 00:05:45.357 00:05:45.357 Elapsed time = 1.007 seconds 00:05:45.357 00:05:45.357 real 0m3.447s 00:05:45.357 user 0m1.435s 00:05:45.357 sys 0m1.940s 00:05:45.357 13:58:31 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.357 13:58:31 unittest.unittest_bdev -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.357 ************************************ 00:05:45.357 END TEST unittest_bdev 00:05:45.357 ************************************ 00:05:45.357 13:58:31 unittest -- common/autotest_common.sh@1142 -- # return 0 00:05:45.357 13:58:31 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:45.357 13:58:31 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:45.357 13:58:31 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:45.357 13:58:31 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:45.357 13:58:31 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:05:45.357 13:58:31 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.357 13:58:31 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.357 13:58:31 unittest -- common/autotest_common.sh@10 -- # set +x 00:05:45.357 ************************************ 00:05:45.357 START TEST unittest_blob_blobfs 00:05:45.357 ************************************ 00:05:45.357 13:58:31 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:05:45.357 13:58:31 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:05:45.357 13:58:31 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:05:45.357 00:05:45.357 00:05:45.357 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.357 http://cunit.sourceforge.net/ 00:05:45.357 00:05:45.357 00:05:45.357 Suite: blob_nocopy_noextent 00:05:45.357 Test: blob_init ...[2024-07-15 13:58:31.255583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:45.357 passed 00:05:45.357 Test: blob_thin_provision ...passed 00:05:45.357 Test: blob_read_only ...passed 00:05:45.357 Test: bs_load ...[2024-07-15 13:58:31.324954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:45.357 passed 00:05:45.357 Test: bs_load_custom_cluster_size ...passed 00:05:45.616 Test: bs_load_after_failed_grow ...passed 00:05:45.616 Test: bs_cluster_sz ...[2024-07-15 13:58:31.360166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:45.616 [2024-07-15 13:58:31.360754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:45.616 [2024-07-15 13:58:31.361091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:45.616 passed 00:05:45.616 Test: bs_resize_md ...passed 00:05:45.616 Test: bs_destroy ...passed 00:05:45.616 Test: bs_type ...passed 00:05:45.616 Test: bs_super_block ...passed 00:05:45.616 Test: bs_test_recover_cluster_count ...passed 00:05:45.616 Test: bs_grow_live ...passed 00:05:45.616 Test: bs_grow_live_no_space ...passed 00:05:45.616 Test: bs_test_grow ...passed 00:05:45.616 Test: blob_serialize_test ...passed 00:05:45.617 Test: super_block_crc ...passed 00:05:45.617 Test: blob_thin_prov_write_count_io ...passed 00:05:45.617 Test: blob_thin_prov_unmap_cluster ...passed 00:05:45.617 Test: bs_load_iter_test ...passed 00:05:45.617 Test: blob_relations ...[2024-07-15 13:58:31.572905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:45.617 [2024-07-15 13:58:31.573241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.617 [2024-07-15 13:58:31.574060] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:45.617 [2024-07-15 13:58:31.574249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.617 passed 00:05:45.617 Test: blob_relations2 ...[2024-07-15 13:58:31.590158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:45.617 [2024-07-15 13:58:31.590460] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.617 [2024-07-15 13:58:31.590549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:45.617 [2024-07-15 13:58:31.590754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.617 [2024-07-15 13:58:31.592084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:45.617 [2024-07-15 13:58:31.592265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.617 [2024-07-15 13:58:31.592883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:45.617 [2024-07-15 13:58:31.593064] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.617 passed 00:05:45.617 Test: blob_relations3 ...passed 00:05:45.876 Test: blobstore_clean_power_failure ...passed 00:05:45.876 Test: blob_delete_snapshot_power_failure ...[2024-07-15 13:58:31.773230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:45.876 [2024-07-15 13:58:31.787720] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:45.876 [2024-07-15 13:58:31.788111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:45.876 [2024-07-15 13:58:31.788197] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.876 [2024-07-15 13:58:31.802241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:45.876 [2024-07-15 13:58:31.802587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:45.876 [2024-07-15 13:58:31.802817] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:45.876 [2024-07-15 13:58:31.803006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.876 [2024-07-15 13:58:31.817149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:45.876 [2024-07-15 13:58:31.817494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.876 [2024-07-15 13:58:31.831958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:45.876 [2024-07-15 13:58:31.832350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:45.876 [2024-07-15 13:58:31.846980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:45.876 [2024-07-15 13:58:31.847392] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:46.135 passed 00:05:46.135 Test: blob_create_snapshot_power_failure ...[2024-07-15 13:58:31.891443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:46.135 [2024-07-15 13:58:31.918181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:46.135 [2024-07-15 13:58:31.932219] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:46.135 passed 00:05:46.135 Test: blob_io_unit ...passed 00:05:46.135 Test: blob_io_unit_compatibility ...passed 00:05:46.135 Test: blob_ext_md_pages ...passed 00:05:46.135 Test: blob_esnap_io_4096_4096 ...passed 00:05:46.135 Test: blob_esnap_io_512_512 ...passed 00:05:46.135 Test: blob_esnap_io_4096_512 ...passed 00:05:46.135 Test: blob_esnap_io_512_4096 ...passed 00:05:46.393 Test: blob_esnap_clone_resize ...passed 00:05:46.393 Suite: blob_bs_nocopy_noextent 00:05:46.393 Test: blob_open ...passed 00:05:46.393 Test: blob_create ...[2024-07-15 13:58:32.225192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:46.393 passed 00:05:46.393 Test: blob_create_loop ...passed 00:05:46.393 Test: blob_create_fail ...[2024-07-15 13:58:32.326434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:46.393 passed 00:05:46.393 Test: blob_create_internal ...passed 00:05:46.652 Test: blob_create_zero_extent ...passed 00:05:46.652 Test: blob_snapshot ...passed 00:05:46.652 Test: blob_clone ...passed 00:05:46.652 Test: blob_inflate 
...[2024-07-15 13:58:32.529854] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:46.652 passed 00:05:46.652 Test: blob_delete ...passed 00:05:46.652 Test: blob_resize_test ...[2024-07-15 13:58:32.604389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:46.652 passed 00:05:46.910 Test: blob_resize_thin_test ...passed 00:05:46.910 Test: channel_ops ...passed 00:05:46.910 Test: blob_super ...passed 00:05:46.910 Test: blob_rw_verify_iov ...passed 00:05:46.910 Test: blob_unmap ...passed 00:05:46.910 Test: blob_iter ...passed 00:05:46.910 Test: blob_parse_md ...passed 00:05:47.168 Test: bs_load_pending_removal ...passed 00:05:47.168 Test: bs_unload ...[2024-07-15 13:58:32.936441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:47.168 passed 00:05:47.168 Test: bs_usable_clusters ...passed 00:05:47.168 Test: blob_crc ...[2024-07-15 13:58:33.011441] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:47.168 [2024-07-15 13:58:33.011877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:47.168 passed 00:05:47.168 Test: blob_flags ...passed 00:05:47.168 Test: bs_version ...passed 00:05:47.168 Test: blob_set_xattrs_test ...[2024-07-15 13:58:33.121771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:47.168 [2024-07-15 13:58:33.122073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:47.168 passed 00:05:47.454 Test: blob_thin_prov_alloc ...passed 00:05:47.454 Test: blob_insert_cluster_msg_test ...passed 00:05:47.454 Test: blob_thin_prov_rw ...passed 00:05:47.454 Test: blob_thin_prov_rle ...passed 00:05:47.454 Test: blob_thin_prov_rw_iov ...passed 00:05:47.454 Test: blob_snapshot_rw ...passed 00:05:47.454 Test: blob_snapshot_rw_iov ...passed 00:05:47.712 Test: blob_inflate_rw ...passed 00:05:47.712 Test: blob_snapshot_freeze_io ...passed 00:05:47.972 Test: blob_operation_split_rw ...passed 00:05:47.972 Test: blob_operation_split_rw_iov ...passed 00:05:47.972 Test: blob_simultaneous_operations ...[2024-07-15 13:58:33.951659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:47.972 [2024-07-15 13:58:33.951999] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:47.972 [2024-07-15 13:58:33.953368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:47.972 [2024-07-15 13:58:33.953559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:47.972 [2024-07-15 13:58:33.967577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:47.972 [2024-07-15 13:58:33.967794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:47.972 [2024-07-15 13:58:33.967989] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:47.972 [2024-07-15 13:58:33.968216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:48.231 passed 00:05:48.231 Test: blob_persist_test ...passed 00:05:48.231 Test: blob_decouple_snapshot ...passed 00:05:48.231 Test: blob_seek_io_unit ...passed 00:05:48.231 Test: blob_nested_freezes ...passed 00:05:48.491 Test: blob_clone_resize ...passed 00:05:48.491 Test: blob_shallow_copy ...[2024-07-15 13:58:34.289753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:48.491 [2024-07-15 13:58:34.290430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:48.491 [2024-07-15 13:58:34.290911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:48.491 passed 00:05:48.491 Suite: blob_blob_nocopy_noextent 00:05:48.491 Test: blob_write ...passed 00:05:48.491 Test: blob_read ...passed 00:05:48.491 Test: blob_rw_verify ...passed 00:05:48.491 Test: blob_rw_verify_iov_nomem ...passed 00:05:48.749 Test: blob_rw_iov_read_only ...passed 00:05:48.749 Test: blob_xattr ...passed 00:05:48.749 Test: blob_dirty_shutdown ...passed 00:05:48.749 Test: blob_is_degraded ...passed 00:05:48.749 Suite: blob_esnap_bs_nocopy_noextent 00:05:48.749 Test: blob_esnap_create ...passed 00:05:48.749 Test: blob_esnap_thread_add_remove ...passed 00:05:48.749 Test: blob_esnap_clone_snapshot ...passed 00:05:49.008 Test: blob_esnap_clone_inflate ...passed 00:05:49.008 Test: blob_esnap_clone_decouple ...passed 00:05:49.008 Test: blob_esnap_clone_reload ...passed 00:05:49.008 Test: blob_esnap_hotplug ...passed 00:05:49.008 Test: blob_set_parent ...[2024-07-15 13:58:34.927557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:49.008 [2024-07-15 13:58:34.927899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:49.008 [2024-07-15 13:58:34.928178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:49.008 [2024-07-15 13:58:34.928343] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:49.008 [2024-07-15 13:58:34.928952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:49.008 passed 00:05:49.008 Test: blob_set_external_parent ...[2024-07-15 13:58:34.967942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:49.008 [2024-07-15 13:58:34.968279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:49.008 [2024-07-15 13:58:34.968407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:05:49.008 [2024-07-15 13:58:34.968839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:49.008 passed 00:05:49.008 Suite: blob_nocopy_extent 00:05:49.008 Test: blob_init ...[2024-07-15 13:58:34.982354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:49.008 passed 00:05:49.267 Test: blob_thin_provision ...passed 00:05:49.267 Test: blob_read_only ...passed 00:05:49.267 Test: bs_load ...[2024-07-15 13:58:35.038028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:49.267 passed 00:05:49.267 Test: bs_load_custom_cluster_size ...passed 00:05:49.267 Test: bs_load_after_failed_grow ...passed 00:05:49.267 Test: bs_cluster_sz ...[2024-07-15 13:58:35.067571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:49.267 [2024-07-15 13:58:35.067989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:49.267 [2024-07-15 13:58:35.068218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:49.267 passed 00:05:49.267 Test: bs_resize_md ...passed 00:05:49.267 Test: bs_destroy ...passed 00:05:49.267 Test: bs_type ...passed 00:05:49.267 Test: bs_super_block ...passed 00:05:49.267 Test: bs_test_recover_cluster_count ...passed 00:05:49.267 Test: bs_grow_live ...passed 00:05:49.267 Test: bs_grow_live_no_space ...passed 00:05:49.267 Test: bs_test_grow ...passed 00:05:49.267 Test: blob_serialize_test ...passed 00:05:49.267 Test: super_block_crc ...passed 00:05:49.267 Test: blob_thin_prov_write_count_io ...passed 00:05:49.267 Test: blob_thin_prov_unmap_cluster ...passed 00:05:49.267 Test: bs_load_iter_test ...passed 00:05:49.267 Test: blob_relations ...[2024-07-15 13:58:35.253937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.267 [2024-07-15 13:58:35.254193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.267 [2024-07-15 13:58:35.255178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.267 [2024-07-15 13:58:35.255370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.267 passed 00:05:49.598 Test: blob_relations2 ...[2024-07-15 13:58:35.270944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.598 [2024-07-15 13:58:35.271251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 13:58:35.271461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.598 [2024-07-15 13:58:35.271629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 
13:58:35.273030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.598 [2024-07-15 13:58:35.273224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 13:58:35.273744] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:49.598 [2024-07-15 13:58:35.273920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 passed 00:05:49.598 Test: blob_relations3 ...passed 00:05:49.598 Test: blobstore_clean_power_failure ...passed 00:05:49.598 Test: blob_delete_snapshot_power_failure ...[2024-07-15 13:58:35.452621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:49.598 [2024-07-15 13:58:35.466474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:49.598 [2024-07-15 13:58:35.480132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:49.598 [2024-07-15 13:58:35.480491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:49.598 [2024-07-15 13:58:35.480593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 13:58:35.494329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:49.598 [2024-07-15 13:58:35.494660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:49.598 [2024-07-15 13:58:35.494885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:49.598 [2024-07-15 13:58:35.495073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 13:58:35.508924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:49.598 [2024-07-15 13:58:35.509209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:49.598 [2024-07-15 13:58:35.509378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:49.598 [2024-07-15 13:58:35.509565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 13:58:35.523131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:49.598 [2024-07-15 13:58:35.523483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 13:58:35.537191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:49.598 [2024-07-15 13:58:35.537568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 [2024-07-15 13:58:35.552667] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:49.598 [2024-07-15 13:58:35.553105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:49.598 passed 00:05:49.598 Test: blob_create_snapshot_power_failure ...[2024-07-15 13:58:35.596340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:49.865 [2024-07-15 13:58:35.610365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:49.865 [2024-07-15 13:58:35.637544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:49.865 [2024-07-15 13:58:35.651580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:49.865 passed 00:05:49.865 Test: blob_io_unit ...passed 00:05:49.865 Test: blob_io_unit_compatibility ...passed 00:05:49.865 Test: blob_ext_md_pages ...passed 00:05:49.865 Test: blob_esnap_io_4096_4096 ...passed 00:05:49.865 Test: blob_esnap_io_512_512 ...passed 00:05:49.865 Test: blob_esnap_io_4096_512 ...passed 00:05:49.865 Test: blob_esnap_io_512_4096 ...passed 00:05:50.124 Test: blob_esnap_clone_resize ...passed 00:05:50.124 Suite: blob_bs_nocopy_extent 00:05:50.124 Test: blob_open ...passed 00:05:50.124 Test: blob_create ...[2024-07-15 13:58:35.943210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:50.124 passed 00:05:50.124 Test: blob_create_loop ...passed 00:05:50.124 Test: blob_create_fail ...[2024-07-15 13:58:36.055843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:50.124 passed 00:05:50.124 Test: blob_create_internal ...passed 00:05:50.382 Test: blob_create_zero_extent ...passed 00:05:50.382 Test: blob_snapshot ...passed 00:05:50.382 Test: blob_clone ...passed 00:05:50.382 Test: blob_inflate ...[2024-07-15 13:58:36.266269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:05:50.382 passed 00:05:50.382 Test: blob_delete ...passed 00:05:50.382 Test: blob_resize_test ...[2024-07-15 13:58:36.341548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:50.382 passed 00:05:50.640 Test: blob_resize_thin_test ...passed 00:05:50.640 Test: channel_ops ...passed 00:05:50.640 Test: blob_super ...passed 00:05:50.640 Test: blob_rw_verify_iov ...passed 00:05:50.640 Test: blob_unmap ...passed 00:05:50.640 Test: blob_iter ...passed 00:05:50.640 Test: blob_parse_md ...passed 00:05:50.899 Test: bs_load_pending_removal ...passed 00:05:50.899 Test: bs_unload ...[2024-07-15 13:58:36.685644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:50.899 passed 00:05:50.899 Test: bs_usable_clusters ...passed 00:05:50.899 Test: blob_crc ...[2024-07-15 13:58:36.761881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:50.899 [2024-07-15 13:58:36.762250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:50.899 passed 00:05:50.899 Test: blob_flags ...passed 00:05:50.899 Test: bs_version ...passed 00:05:50.899 Test: blob_set_xattrs_test ...[2024-07-15 13:58:36.876313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:50.899 [2024-07-15 13:58:36.876625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:50.899 passed 00:05:51.158 Test: blob_thin_prov_alloc ...passed 00:05:51.158 Test: blob_insert_cluster_msg_test ...passed 00:05:51.158 Test: blob_thin_prov_rw ...passed 00:05:51.158 Test: blob_thin_prov_rle ...passed 00:05:51.158 Test: blob_thin_prov_rw_iov ...passed 00:05:51.158 Test: blob_snapshot_rw ...passed 00:05:51.416 Test: blob_snapshot_rw_iov ...passed 00:05:51.416 Test: blob_inflate_rw ...passed 00:05:51.416 Test: blob_snapshot_freeze_io ...passed 00:05:51.675 Test: blob_operation_split_rw ...passed 00:05:51.675 Test: blob_operation_split_rw_iov ...passed 00:05:51.935 Test: blob_simultaneous_operations ...[2024-07-15 13:58:37.679305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:51.935 [2024-07-15 13:58:37.679700] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:51.935 [2024-07-15 13:58:37.680924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:51.935 [2024-07-15 13:58:37.681094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:51.935 [2024-07-15 13:58:37.692157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:51.935 [2024-07-15 13:58:37.692404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:51.935 [2024-07-15 13:58:37.692542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:51.935 [2024-07-15 13:58:37.692771] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:51.935 passed 00:05:51.935 Test: blob_persist_test ...passed 00:05:51.935 Test: blob_decouple_snapshot ...passed 00:05:51.935 Test: blob_seek_io_unit ...passed 00:05:51.935 Test: blob_nested_freezes ...passed 00:05:52.194 Test: blob_clone_resize ...passed 00:05:52.194 Test: blob_shallow_copy ...[2024-07-15 13:58:37.997106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:52.194 [2024-07-15 13:58:37.997679] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:52.194 [2024-07-15 13:58:37.998057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:52.194 passed 00:05:52.194 Suite: blob_blob_nocopy_extent 00:05:52.194 Test: blob_write ...passed 00:05:52.194 Test: blob_read ...passed 00:05:52.194 Test: blob_rw_verify ...passed 00:05:52.194 Test: blob_rw_verify_iov_nomem ...passed 00:05:52.453 Test: blob_rw_iov_read_only ...passed 00:05:52.453 Test: blob_xattr ...passed 00:05:52.453 Test: blob_dirty_shutdown ...passed 00:05:52.453 Test: blob_is_degraded ...passed 00:05:52.453 Suite: blob_esnap_bs_nocopy_extent 00:05:52.453 Test: blob_esnap_create ...passed 00:05:52.453 Test: blob_esnap_thread_add_remove ...passed 00:05:52.453 Test: blob_esnap_clone_snapshot ...passed 00:05:52.712 Test: blob_esnap_clone_inflate ...passed 00:05:52.712 Test: blob_esnap_clone_decouple ...passed 00:05:52.712 Test: blob_esnap_clone_reload ...passed 00:05:52.712 Test: blob_esnap_hotplug ...passed 00:05:52.712 Test: blob_set_parent ...[2024-07-15 13:58:38.638718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:52.712 [2024-07-15 13:58:38.639056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:52.712 [2024-07-15 13:58:38.639275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:52.712 [2024-07-15 13:58:38.639438] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:52.712 [2024-07-15 13:58:38.639880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:52.712 passed 00:05:52.712 Test: blob_set_external_parent ...[2024-07-15 13:58:38.681010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:52.712 [2024-07-15 13:58:38.681284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:52.712 [2024-07-15 13:58:38.681466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:05:52.712 [2024-07-15 13:58:38.681873] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:52.712 passed 00:05:52.712 Suite: blob_copy_noextent 00:05:52.712 Test: blob_init ...[2024-07-15 13:58:38.695293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:52.712 passed 00:05:52.971 Test: blob_thin_provision ...passed 00:05:52.971 Test: blob_read_only ...passed 00:05:52.971 Test: bs_load ...[2024-07-15 13:58:38.748977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:52.971 passed 00:05:52.971 Test: bs_load_custom_cluster_size ...passed 00:05:52.971 Test: bs_load_after_failed_grow ...passed 00:05:52.971 Test: bs_cluster_sz ...[2024-07-15 13:58:38.775929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:52.971 [2024-07-15 13:58:38.776153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:05:52.971 [2024-07-15 13:58:38.776341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:52.971 passed 00:05:52.971 Test: bs_resize_md ...passed 00:05:52.971 Test: bs_destroy ...passed 00:05:52.971 Test: bs_type ...passed 00:05:52.971 Test: bs_super_block ...passed 00:05:52.971 Test: bs_test_recover_cluster_count ...passed 00:05:52.971 Test: bs_grow_live ...passed 00:05:52.971 Test: bs_grow_live_no_space ...passed 00:05:52.971 Test: bs_test_grow ...passed 00:05:52.971 Test: blob_serialize_test ...passed 00:05:52.971 Test: super_block_crc ...passed 00:05:52.971 Test: blob_thin_prov_write_count_io ...passed 00:05:52.971 Test: blob_thin_prov_unmap_cluster ...passed 00:05:52.971 Test: bs_load_iter_test ...passed 00:05:53.230 Test: blob_relations ...[2024-07-15 13:58:38.974226] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:53.230 [2024-07-15 13:58:38.974526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 [2024-07-15 13:58:38.974964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:53.230 [2024-07-15 13:58:38.975116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 passed 00:05:53.230 Test: blob_relations2 ...[2024-07-15 13:58:38.989356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:53.230 [2024-07-15 13:58:38.989659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 [2024-07-15 13:58:38.989748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:53.230 [2024-07-15 13:58:38.989883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 [2024-07-15 13:58:38.990544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:05:53.230 [2024-07-15 13:58:38.990702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 [2024-07-15 13:58:38.990979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:53.230 [2024-07-15 13:58:38.991116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 passed 00:05:53.230 Test: blob_relations3 ...passed 00:05:53.230 Test: blobstore_clean_power_failure ...passed 00:05:53.230 Test: blob_delete_snapshot_power_failure ...[2024-07-15 13:58:39.173174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:53.230 [2024-07-15 13:58:39.186472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:53.230 [2024-07-15 13:58:39.186860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:53.230 [2024-07-15 13:58:39.186930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 [2024-07-15 13:58:39.200255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:53.230 [2024-07-15 13:58:39.200561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:53.230 [2024-07-15 13:58:39.200626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:53.230 [2024-07-15 13:58:39.200767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 [2024-07-15 13:58:39.214153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:53.230 [2024-07-15 13:58:39.214518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.230 [2024-07-15 13:58:39.228030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:53.230 [2024-07-15 13:58:39.228405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.489 [2024-07-15 13:58:39.242025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:53.489 [2024-07-15 13:58:39.242375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:53.489 passed 00:05:53.489 Test: blob_create_snapshot_power_failure ...[2024-07-15 13:58:39.284799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:53.489 [2024-07-15 13:58:39.310989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:05:53.489 [2024-07-15 13:58:39.324004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:53.489 passed 
00:05:53.489 Test: blob_io_unit ...passed 00:05:53.489 Test: blob_io_unit_compatibility ...passed 00:05:53.489 Test: blob_ext_md_pages ...passed 00:05:53.489 Test: blob_esnap_io_4096_4096 ...passed 00:05:53.489 Test: blob_esnap_io_512_512 ...passed 00:05:53.748 Test: blob_esnap_io_4096_512 ...passed 00:05:53.748 Test: blob_esnap_io_512_4096 ...passed 00:05:53.748 Test: blob_esnap_clone_resize ...passed 00:05:53.748 Suite: blob_bs_copy_noextent 00:05:53.748 Test: blob_open ...passed 00:05:53.748 Test: blob_create ...[2024-07-15 13:58:39.607685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:53.748 passed 00:05:53.748 Test: blob_create_loop ...passed 00:05:53.748 Test: blob_create_fail ...[2024-07-15 13:58:39.705513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:53.748 passed 00:05:54.006 Test: blob_create_internal ...passed 00:05:54.006 Test: blob_create_zero_extent ...passed 00:05:54.006 Test: blob_snapshot ...passed 00:05:54.006 Test: blob_clone ...passed 00:05:54.006 Test: blob_inflate ...[2024-07-15 13:58:39.900918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:54.006 passed 00:05:54.006 Test: blob_delete ...passed 00:05:54.006 Test: blob_resize_test ...[2024-07-15 13:58:39.978636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:54.006 passed 00:05:54.264 Test: blob_resize_thin_test ...passed 00:05:54.264 Test: channel_ops ...passed 00:05:54.264 Test: blob_super ...passed 00:05:54.264 Test: blob_rw_verify_iov ...passed 00:05:54.264 Test: blob_unmap ...passed 00:05:54.264 Test: blob_iter ...passed 00:05:54.264 Test: blob_parse_md ...passed 00:05:54.522 Test: bs_load_pending_removal ...passed 00:05:54.522 Test: bs_unload ...[2024-07-15 13:58:40.322074] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:54.522 passed 00:05:54.522 Test: bs_usable_clusters ...passed 00:05:54.522 Test: blob_crc ...[2024-07-15 13:58:40.394123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:54.522 [2024-07-15 13:58:40.394459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:54.522 passed 00:05:54.522 Test: blob_flags ...passed 00:05:54.523 Test: bs_version ...passed 00:05:54.523 Test: blob_set_xattrs_test ...[2024-07-15 13:58:40.503365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:54.523 [2024-07-15 13:58:40.503766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:54.523 passed 00:05:54.780 Test: blob_thin_prov_alloc ...passed 00:05:54.780 Test: blob_insert_cluster_msg_test ...passed 00:05:54.780 Test: blob_thin_prov_rw ...passed 00:05:54.780 Test: blob_thin_prov_rle ...passed 00:05:54.780 Test: blob_thin_prov_rw_iov ...passed 00:05:54.780 Test: blob_snapshot_rw ...passed 00:05:55.038 Test: blob_snapshot_rw_iov ...passed 00:05:55.038 Test: 
blob_inflate_rw ...passed 00:05:55.038 Test: blob_snapshot_freeze_io ...passed 00:05:55.296 Test: blob_operation_split_rw ...passed 00:05:55.296 Test: blob_operation_split_rw_iov ...passed 00:05:55.296 Test: blob_simultaneous_operations ...[2024-07-15 13:58:41.286932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:55.296 [2024-07-15 13:58:41.287314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.296 [2024-07-15 13:58:41.287780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:55.296 [2024-07-15 13:58:41.287926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.296 [2024-07-15 13:58:41.290487] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:55.296 [2024-07-15 13:58:41.290632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.296 [2024-07-15 13:58:41.290857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:55.296 [2024-07-15 13:58:41.290996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:55.554 passed 00:05:55.554 Test: blob_persist_test ...passed 00:05:55.554 Test: blob_decouple_snapshot ...passed 00:05:55.554 Test: blob_seek_io_unit ...passed 00:05:55.554 Test: blob_nested_freezes ...passed 00:05:55.554 Test: blob_clone_resize ...passed 00:05:55.554 Test: blob_shallow_copy ...[2024-07-15 13:58:41.553865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:55.554 [2024-07-15 13:58:41.554305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:55.554 [2024-07-15 13:58:41.554662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:55.812 passed 00:05:55.813 Suite: blob_blob_copy_noextent 00:05:55.813 Test: blob_write ...passed 00:05:55.813 Test: blob_read ...passed 00:05:55.813 Test: blob_rw_verify ...passed 00:05:55.813 Test: blob_rw_verify_iov_nomem ...passed 00:05:55.813 Test: blob_rw_iov_read_only ...passed 00:05:55.813 Test: blob_xattr ...passed 00:05:56.070 Test: blob_dirty_shutdown ...passed 00:05:56.071 Test: blob_is_degraded ...passed 00:05:56.071 Suite: blob_esnap_bs_copy_noextent 00:05:56.071 Test: blob_esnap_create ...passed 00:05:56.071 Test: blob_esnap_thread_add_remove ...passed 00:05:56.071 Test: blob_esnap_clone_snapshot ...passed 00:05:56.071 Test: blob_esnap_clone_inflate ...passed 00:05:56.328 Test: blob_esnap_clone_decouple ...passed 00:05:56.329 Test: blob_esnap_clone_reload ...passed 00:05:56.329 Test: blob_esnap_hotplug ...passed 00:05:56.329 Test: blob_set_parent ...[2024-07-15 13:58:42.189062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:56.329 [2024-07-15 13:58:42.189399] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:56.329 [2024-07-15 13:58:42.189623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:56.329 [2024-07-15 13:58:42.189826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:56.329 [2024-07-15 13:58:42.190221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:56.329 passed 00:05:56.329 Test: blob_set_external_parent ...[2024-07-15 13:58:42.225280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:56.329 [2024-07-15 13:58:42.225599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:56.329 [2024-07-15 13:58:42.225797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:05:56.329 [2024-07-15 13:58:42.226146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:56.329 passed 00:05:56.329 Suite: blob_copy_extent 00:05:56.329 Test: blob_init ...[2024-07-15 13:58:42.238497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:05:56.329 passed 00:05:56.329 Test: blob_thin_provision ...passed 00:05:56.329 Test: blob_read_only ...passed 00:05:56.329 Test: bs_load ...[2024-07-15 13:58:42.286577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:05:56.329 passed 00:05:56.329 Test: bs_load_custom_cluster_size ...passed 00:05:56.329 Test: bs_load_after_failed_grow ...passed 00:05:56.329 Test: bs_cluster_sz ...[2024-07-15 13:58:42.311770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:05:56.329 [2024-07-15 13:58:42.312016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:05:56.329 [2024-07-15 13:58:42.312189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:05:56.329 passed 00:05:56.587 Test: bs_resize_md ...passed 00:05:56.587 Test: bs_destroy ...passed 00:05:56.587 Test: bs_type ...passed 00:05:56.587 Test: bs_super_block ...passed 00:05:56.587 Test: bs_test_recover_cluster_count ...passed 00:05:56.587 Test: bs_grow_live ...passed 00:05:56.587 Test: bs_grow_live_no_space ...passed 00:05:56.587 Test: bs_test_grow ...passed 00:05:56.587 Test: blob_serialize_test ...passed 00:05:56.587 Test: super_block_crc ...passed 00:05:56.587 Test: blob_thin_prov_write_count_io ...passed 00:05:56.587 Test: blob_thin_prov_unmap_cluster ...passed 00:05:56.587 Test: bs_load_iter_test ...passed 00:05:56.587 Test: blob_relations ...[2024-07-15 13:58:42.488019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:56.587 [2024-07-15 13:58:42.488354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.587 [2024-07-15 13:58:42.488873] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:56.587 [2024-07-15 13:58:42.489024] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.587 passed 00:05:56.587 Test: blob_relations2 ...[2024-07-15 13:58:42.502896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:56.587 [2024-07-15 13:58:42.503198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.587 [2024-07-15 13:58:42.503278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:56.587 [2024-07-15 13:58:42.503401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.587 [2024-07-15 13:58:42.504125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:56.587 [2024-07-15 13:58:42.504277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.587 [2024-07-15 13:58:42.504625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:05:56.587 [2024-07-15 13:58:42.504791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.587 passed 00:05:56.587 Test: blob_relations3 ...passed 00:05:56.845 Test: blobstore_clean_power_failure ...passed 00:05:56.845 Test: blob_delete_snapshot_power_failure ...[2024-07-15 13:58:42.670686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:56.845 [2024-07-15 13:58:42.683263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:56.845 [2024-07-15 13:58:42.695820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:56.845 [2024-07-15 13:58:42.696131] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:56.845 [2024-07-15 13:58:42.696284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.845 [2024-07-15 13:58:42.708697] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:56.845 [2024-07-15 13:58:42.709027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:56.845 [2024-07-15 13:58:42.709091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:56.845 [2024-07-15 13:58:42.709319] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.845 [2024-07-15 13:58:42.722406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:56.845 [2024-07-15 13:58:42.725644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:05:56.845 [2024-07-15 13:58:42.725817] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:05:56.845 [2024-07-15 13:58:42.725942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.845 [2024-07-15 13:58:42.738612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:05:56.845 [2024-07-15 13:58:42.738958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.845 [2024-07-15 13:58:42.756350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:05:56.845 [2024-07-15 13:58:42.756752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.845 [2024-07-15 13:58:42.770494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:05:56.845 [2024-07-15 13:58:42.770898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:56.845 passed 00:05:56.845 Test: blob_create_snapshot_power_failure ...[2024-07-15 13:58:42.812977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:05:56.845 [2024-07-15 13:58:42.826352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:05:57.102 [2024-07-15 13:58:42.852133] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:05:57.102 [2024-07-15 13:58:42.865463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:05:57.102 passed 00:05:57.102 Test: blob_io_unit ...passed 00:05:57.102 Test: blob_io_unit_compatibility ...passed 00:05:57.102 Test: blob_ext_md_pages ...passed 00:05:57.102 Test: blob_esnap_io_4096_4096 ...passed 00:05:57.102 Test: blob_esnap_io_512_512 ...passed 00:05:57.102 Test: blob_esnap_io_4096_512 ...passed 00:05:57.102 Test: 
blob_esnap_io_512_4096 ...passed 00:05:57.102 Test: blob_esnap_clone_resize ...passed 00:05:57.102 Suite: blob_bs_copy_extent 00:05:57.359 Test: blob_open ...passed 00:05:57.359 Test: blob_create ...[2024-07-15 13:58:43.142874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:05:57.359 passed 00:05:57.359 Test: blob_create_loop ...passed 00:05:57.359 Test: blob_create_fail ...[2024-07-15 13:58:43.246692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:57.359 passed 00:05:57.359 Test: blob_create_internal ...passed 00:05:57.359 Test: blob_create_zero_extent ...passed 00:05:57.616 Test: blob_snapshot ...passed 00:05:57.616 Test: blob_clone ...passed 00:05:57.616 Test: blob_inflate ...[2024-07-15 13:58:43.443457] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:05:57.616 passed 00:05:57.616 Test: blob_delete ...passed 00:05:57.616 Test: blob_resize_test ...[2024-07-15 13:58:43.519772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:05:57.616 passed 00:05:57.616 Test: blob_resize_thin_test ...passed 00:05:57.874 Test: channel_ops ...passed 00:05:57.874 Test: blob_super ...passed 00:05:57.874 Test: blob_rw_verify_iov ...passed 00:05:57.874 Test: blob_unmap ...passed 00:05:57.874 Test: blob_iter ...passed 00:05:57.874 Test: blob_parse_md ...passed 00:05:57.874 Test: bs_load_pending_removal ...passed 00:05:58.130 Test: bs_unload ...[2024-07-15 13:58:43.879591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:05:58.130 passed 00:05:58.130 Test: bs_usable_clusters ...passed 00:05:58.130 Test: blob_crc ...[2024-07-15 13:58:43.958083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:58.130 [2024-07-15 13:58:43.958496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:05:58.130 passed 00:05:58.130 Test: blob_flags ...passed 00:05:58.130 Test: bs_version ...passed 00:05:58.130 Test: blob_set_xattrs_test ...[2024-07-15 13:58:44.076263] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:58.130 [2024-07-15 13:58:44.076705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:05:58.130 passed 00:05:58.386 Test: blob_thin_prov_alloc ...passed 00:05:58.387 Test: blob_insert_cluster_msg_test ...passed 00:05:58.387 Test: blob_thin_prov_rw ...passed 00:05:58.387 Test: blob_thin_prov_rle ...passed 00:05:58.387 Test: blob_thin_prov_rw_iov ...passed 00:05:58.387 Test: blob_snapshot_rw ...passed 00:05:58.643 Test: blob_snapshot_rw_iov ...passed 00:05:58.643 Test: blob_inflate_rw ...passed 00:05:58.643 Test: blob_snapshot_freeze_io ...passed 00:05:58.901 Test: blob_operation_split_rw ...passed 00:05:58.901 Test: blob_operation_split_rw_iov ...passed 00:05:58.901 Test: blob_simultaneous_operations ...[2024-07-15 13:58:44.873302] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.901 [2024-07-15 13:58:44.873607] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.901 [2024-07-15 13:58:44.874096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.901 [2024-07-15 13:58:44.874250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.901 [2024-07-15 13:58:44.876736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.901 [2024-07-15 13:58:44.876886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.901 [2024-07-15 13:58:44.877011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:05:58.901 [2024-07-15 13:58:44.877204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:05:58.901 passed 00:05:59.157 Test: blob_persist_test ...passed 00:05:59.157 Test: blob_decouple_snapshot ...passed 00:05:59.157 Test: blob_seek_io_unit ...passed 00:05:59.157 Test: blob_nested_freezes ...passed 00:05:59.157 Test: blob_clone_resize ...passed 00:05:59.157 Test: blob_shallow_copy ...[2024-07-15 13:58:45.137168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:05:59.157 [2024-07-15 13:58:45.137739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:05:59.157 [2024-07-15 13:58:45.138104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:05:59.157 passed 00:05:59.157 Suite: blob_blob_copy_extent 00:05:59.414 Test: blob_write ...passed 00:05:59.414 Test: blob_read ...passed 00:05:59.414 Test: blob_rw_verify ...passed 00:05:59.414 Test: blob_rw_verify_iov_nomem ...passed 00:05:59.414 Test: blob_rw_iov_read_only ...passed 00:05:59.414 Test: blob_xattr ...passed 00:05:59.669 Test: blob_dirty_shutdown ...passed 00:05:59.669 Test: blob_is_degraded ...passed 00:05:59.669 Suite: blob_esnap_bs_copy_extent 00:05:59.669 Test: blob_esnap_create ...passed 00:05:59.669 Test: blob_esnap_thread_add_remove ...passed 00:05:59.669 Test: blob_esnap_clone_snapshot ...passed 00:05:59.669 Test: blob_esnap_clone_inflate ...passed 00:05:59.669 Test: blob_esnap_clone_decouple ...passed 00:05:59.926 Test: blob_esnap_clone_reload ...passed 00:05:59.926 Test: blob_esnap_hotplug ...passed 00:05:59.926 Test: blob_set_parent ...[2024-07-15 13:58:45.768556] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:05:59.926 [2024-07-15 13:58:45.768882] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:05:59.926 [2024-07-15 13:58:45.769096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:05:59.926 
[2024-07-15 13:58:45.769252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:05:59.926 [2024-07-15 13:58:45.769770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:59.926 passed 00:05:59.926 Test: blob_set_external_parent ...[2024-07-15 13:58:45.805187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:05:59.926 [2024-07-15 13:58:45.805542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:05:59.926 [2024-07-15 13:58:45.805747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:05:59.926 [2024-07-15 13:58:45.806195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:05:59.926 passed 00:05:59.926 00:05:59.926 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.926 suites 16 16 n/a 0 0 00:05:59.926 tests 376 376 376 0 0 00:05:59.926 asserts 143965 143965 143965 0 n/a 00:05:59.926 00:05:59.926 Elapsed time = 14.335 seconds 00:05:59.926 13:58:45 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:05:59.926 00:05:59.926 00:05:59.926 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.926 http://cunit.sourceforge.net/ 00:05:59.926 00:05:59.926 00:05:59.926 Suite: blob_bdev 00:05:59.926 Test: create_bs_dev ...passed 00:05:59.926 Test: create_bs_dev_ro ...[2024-07-15 13:58:45.898526] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:05:59.926 passed 00:05:59.926 Test: create_bs_dev_rw ...passed 00:05:59.926 Test: claim_bs_dev ...[2024-07-15 13:58:45.899460] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:05:59.926 passed 00:05:59.926 Test: claim_bs_dev_ro ...passed 00:05:59.926 Test: deferred_destroy_refs ...passed 00:05:59.926 Test: deferred_destroy_channels ...passed 00:05:59.926 Test: deferred_destroy_threads ...passed 00:05:59.926 00:05:59.926 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.926 suites 1 1 n/a 0 0 00:05:59.926 tests 8 8 8 0 0 00:05:59.926 asserts 119 119 119 0 n/a 00:05:59.926 00:05:59.927 Elapsed time = 0.001 seconds 00:05:59.927 13:58:45 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:05:59.927 00:05:59.927 00:05:59.927 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.927 http://cunit.sourceforge.net/ 00:05:59.927 00:05:59.927 00:05:59.927 Suite: tree 00:05:59.927 Test: blobfs_tree_op_test ...passed 00:05:59.927 00:05:59.927 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.927 suites 1 1 n/a 0 0 00:05:59.927 tests 1 1 1 0 0 00:05:59.927 asserts 27 27 27 0 n/a 00:05:59.927 00:05:59.927 Elapsed time = 0.000 seconds 00:06:00.184 13:58:45 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:00.184 00:06:00.184 00:06:00.184 
CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.184 http://cunit.sourceforge.net/ 00:06:00.184 00:06:00.184 00:06:00.184 Suite: blobfs_async_ut 00:06:00.184 Test: fs_init ...passed 00:06:00.184 Test: fs_open ...passed 00:06:00.184 Test: fs_create ...passed 00:06:00.184 Test: fs_truncate ...passed 00:06:00.184 Test: fs_rename ...[2024-07-15 13:58:46.044423] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:00.184 passed 00:06:00.184 Test: fs_rw_async ...passed 00:06:00.184 Test: fs_writev_readv_async ...passed 00:06:00.184 Test: tree_find_buffer_ut ...passed 00:06:00.184 Test: channel_ops ...passed 00:06:00.184 Test: channel_ops_sync ...passed 00:06:00.184 00:06:00.184 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.184 suites 1 1 n/a 0 0 00:06:00.184 tests 10 10 10 0 0 00:06:00.184 asserts 292 292 292 0 n/a 00:06:00.184 00:06:00.184 Elapsed time = 0.147 seconds 00:06:00.184 13:58:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:00.184 00:06:00.184 00:06:00.184 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.184 http://cunit.sourceforge.net/ 00:06:00.184 00:06:00.184 00:06:00.184 Suite: blobfs_sync_ut 00:06:00.184 Test: cache_read_after_write ...[2024-07-15 13:58:46.183637] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:00.442 passed 00:06:00.442 Test: file_length ...passed 00:06:00.442 Test: append_write_to_extend_blob ...passed 00:06:00.442 Test: partial_buffer ...passed 00:06:00.442 Test: cache_write_null_buffer ...passed 00:06:00.442 Test: fs_create_sync ...passed 00:06:00.442 Test: fs_rename_sync ...passed 00:06:00.442 Test: cache_append_no_cache ...passed 00:06:00.442 Test: fs_delete_file_without_close ...passed 00:06:00.442 00:06:00.442 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.442 suites 1 1 n/a 0 0 00:06:00.442 tests 9 9 9 0 0 00:06:00.442 asserts 345 345 345 0 n/a 00:06:00.442 00:06:00.442 Elapsed time = 0.301 seconds 00:06:00.442 13:58:46 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:00.442 00:06:00.442 00:06:00.442 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.442 http://cunit.sourceforge.net/ 00:06:00.442 00:06:00.442 00:06:00.442 Suite: blobfs_bdev_ut 00:06:00.442 Test: spdk_blobfs_bdev_detect_test ...[2024-07-15 13:58:46.350336] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:00.442 passed 00:06:00.442 Test: spdk_blobfs_bdev_create_test ...[2024-07-15 13:58:46.350998] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:00.442 passed 00:06:00.442 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:00.442 00:06:00.442 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.442 suites 1 1 n/a 0 0 00:06:00.442 tests 3 3 3 0 0 00:06:00.442 asserts 9 9 9 0 n/a 00:06:00.442 00:06:00.442 Elapsed time = 0.001 seconds 00:06:00.442 00:06:00.442 real 0m15.130s 00:06:00.442 user 0m14.375s 00:06:00.442 sys 0m0.679s 00:06:00.442 13:58:46 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.442 13:58:46 
unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:06:00.442 ************************************ 00:06:00.442 END TEST unittest_blob_blobfs 00:06:00.442 ************************************ 00:06:00.442 13:58:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:00.442 13:58:46 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:06:00.442 13:58:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.442 13:58:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.442 13:58:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:00.442 ************************************ 00:06:00.442 START TEST unittest_event 00:06:00.442 ************************************ 00:06:00.442 13:58:46 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:06:00.442 13:58:46 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:00.442 00:06:00.442 00:06:00.442 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.442 http://cunit.sourceforge.net/ 00:06:00.442 00:06:00.442 00:06:00.442 Suite: app_suite 00:06:00.442 Test: test_spdk_app_parse_args ...app_ut: invalid option -- 'z' 00:06:00.442 app_ut [options] 00:06:00.442 00:06:00.442 CPU options: 00:06:00.442 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:00.442 (like [0,1,10]) 00:06:00.442 --lcores lcore to CPU mapping list. The list is in the format: 00:06:00.442 [<,lcores[@CPUs]>...] 00:06:00.442 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:00.442 Within the group, '-' is used for range separator, 00:06:00.442 ',' is used for single number separator. 00:06:00.442 '( )' can be omitted for single element group, 00:06:00.442 '@' can be omitted if cpus and lcores have the same value 00:06:00.442 --disable-cpumask-locks Disable CPU core lock files. 00:06:00.442 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:00.442 pollers in the app support interrupt mode) 00:06:00.442 -p, --main-core main (primary) core for DPDK 00:06:00.442 00:06:00.442 Configuration options: 00:06:00.442 -c, --config, --json JSON config file 00:06:00.442 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:00.442 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:00.442 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:00.442 --rpcs-allowed comma-separated list of permitted RPCS 00:06:00.442 --json-ignore-init-errors don't exit on invalid config entry 00:06:00.442 00:06:00.442 Memory options: 00:06:00.442 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:00.442 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:00.442 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:00.442 -R, --huge-unlink unlink huge files after initialization 00:06:00.442 -n, --mem-channels number of memory channels used for DPDK 00:06:00.442 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:00.442 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:00.442 --no-huge run without using hugepages 00:06:00.442 -i, --shm-id shared memory ID (optional) 00:06:00.442 -g, --single-file-segments force creating just one hugetlbfs file 00:06:00.442 00:06:00.442 PCI options: 00:06:00.442 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:00.442 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:00.442 -u, --no-pci disable PCI access 00:06:00.442 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:00.442 00:06:00.442 Log options: 00:06:00.442 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:06:00.442 --silence-noticelog disable notice level logging to stderr 00:06:00.442 00:06:00.442 Trace options: 00:06:00.442 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:00.442 setting 0 to disable trace (default 32768) 00:06:00.442 Tracepoints vary in size and can use more than one trace entry. 00:06:00.442 -e, --tpoint-group [:] 00:06:00.442 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:06:00.442 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:00.442 a tracepoint group. First tpoint inside a group can be enabled by 00:06:00.442 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:00.442 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:06:00.442 in /include/spdk_internal/trace_defs.h 00:06:00.442 00:06:00.442 Other options: 00:06:00.442 -h, --help show this usage 00:06:00.442 -v, --version print SPDK version 00:06:00.442 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:00.442 --env-context Opaque context for use of the env implementation 00:06:00.442 app_ut [options] 00:06:00.442 00:06:00.442 CPU options: 00:06:00.442 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:00.442 (like [0,1,10]) 00:06:00.442 --lcores lcore to CPU mapping list. The list is in the format: 00:06:00.442 [<,lcores[@CPUs]>...] 00:06:00.442 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:00.442 Within the group, '-' is used for range separator, 00:06:00.442 ',' is used for single number separator. 00:06:00.442 '( )' can be omitted for single element group, 00:06:00.442 app_ut: unrecognized option '--test-long-opt' 00:06:00.442 '@' can be omitted if cpus and lcores have the same value 00:06:00.442 --disable-cpumask-locks Disable CPU core lock files. 
00:06:00.442 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:00.442 pollers in the app support interrupt mode) 00:06:00.442 -p, --main-core main (primary) core for DPDK 00:06:00.442 00:06:00.442 Configuration options: 00:06:00.442 -c, --config, --json JSON config file 00:06:00.442 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:00.442 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:06:00.442 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:00.442 --rpcs-allowed comma-separated list of permitted RPCS 00:06:00.442 --json-ignore-init-errors don't exit on invalid config entry 00:06:00.442 00:06:00.442 Memory options: 00:06:00.442 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:00.442 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:00.442 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:00.442 -R, --huge-unlink unlink huge files after initialization 00:06:00.442 -n, --mem-channels number of memory channels used for DPDK 00:06:00.442 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:00.442 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:00.442 --no-huge run without using hugepages 00:06:00.442 -i, --shm-id shared memory ID (optional) 00:06:00.442 -g, --single-file-segments force creating just one hugetlbfs file 00:06:00.442 00:06:00.442 PCI options: 00:06:00.442 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:00.442 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:00.442 -u, --no-pci disable PCI access 00:06:00.442 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:00.442 00:06:00.442 Log options: 00:06:00.442 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:06:00.442 --silence-noticelog disable notice level logging to stderr 00:06:00.442 00:06:00.442 Trace options: 00:06:00.442 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:00.442 setting 0 to disable trace (default 32768) 00:06:00.442 Tracepoints vary in size and can use more than one trace entry. 00:06:00.442 -e, --tpoint-group [:] 00:06:00.442 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:06:00.442 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:00.442 a tracepoint group. First tpoint inside a group can be enabled by 00:06:00.442 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:00.442 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:06:00.442 in /include/spdk_internal/trace_defs.h 00:06:00.442 00:06:00.442 Other options: 00:06:00.442 -h, --help show this usage 00:06:00.442 -v, --version print SPDK version 00:06:00.442 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:00.442 --env-context Opaque context for use of the env implementation 00:06:00.442 [2024-07-15 13:58:46.441048] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1191:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 
00:06:00.442 [2024-07-15 13:58:46.441387] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1372:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:00.442 app_ut [options] 00:06:00.442 00:06:00.442 CPU options: 00:06:00.442 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:00.442 (like [0,1,10]) 00:06:00.442 --lcores lcore to CPU mapping list. The list is in the format: 00:06:00.442 [<,lcores[@CPUs]>...] 00:06:00.442 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:00.442 Within the group, '-' is used for range separator, 00:06:00.442 ',' is used for single number separator. 00:06:00.442 '( )' can be omitted for single element group, 00:06:00.442 '@' can be omitted if cpus and lcores have the same value 00:06:00.442 --disable-cpumask-locks Disable CPU core lock files. 00:06:00.442 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:00.442 pollers in the app support interrupt mode) 00:06:00.442 -p, --main-core main (primary) core for DPDK 00:06:00.442 00:06:00.442 Configuration options: 00:06:00.442 -c, --config, --json JSON config file 00:06:00.442 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:00.442 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:06:00.442 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:00.442 --rpcs-allowed comma-separated list of permitted RPCS 00:06:00.442 --json-ignore-init-errors don't exit on invalid config entry 00:06:00.442 00:06:00.442 Memory options: 00:06:00.442 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:00.442 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:00.442 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:00.442 -R, --huge-unlink unlink huge files after initialization 00:06:00.442 -n, --mem-channels number of memory channels used for DPDK 00:06:00.442 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:00.442 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:00.442 --no-huge run without using hugepages 00:06:00.442 -i, --shm-id shared memory ID (optional) 00:06:00.442 -g, --single-file-segments force creating just one hugetlbfs file 00:06:00.442 00:06:00.442 PCI options: 00:06:00.442 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:00.442 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:00.442 -u, --no-pci disable PCI access 00:06:00.442 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:00.442 00:06:00.442 Log options: 00:06:00.442 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:06:00.442 --silence-noticelog disable notice level logging to stderr 00:06:00.442 00:06:00.442 Trace options: 00:06:00.442 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:00.442 setting 0 to disable trace (default 32768) 00:06:00.442 Tracepoints vary in size and can use more than one trace entry. 00:06:00.442 -e, --tpoint-group [:] 00:06:00.442 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:06:00.442 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:00.442 a tracepoint group. First tpoint inside a group can be enabled by 00:06:00.442 setting tpoint_mask to 1 (e.g. bdev:0x1). 
Groups and masks can be 00:06:00.442 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:06:00.442 in /include/spdk_internal/trace_defs.h 00:06:00.442 00:06:00.442 Other options: 00:06:00.442 -h, --help show this usage 00:06:00.442 -v, --version print SPDK version 00:06:00.442 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:00.442 --env-context Opaque context for use of the env implementation 00:06:00.442 passed 00:06:00.442 00:06:00.442 [2024-07-15 13:58:46.441662] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1277:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:00.442 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.442 suites 1 1 n/a 0 0 00:06:00.442 tests 1 1 1 0 0 00:06:00.442 asserts 8 8 8 0 n/a 00:06:00.442 00:06:00.442 Elapsed time = 0.002 seconds 00:06:00.698 13:58:46 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:00.698 00:06:00.698 00:06:00.698 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.698 http://cunit.sourceforge.net/ 00:06:00.698 00:06:00.698 00:06:00.698 Suite: app_suite 00:06:00.698 Test: test_create_reactor ...passed 00:06:00.698 Test: test_init_reactors ...passed 00:06:00.698 Test: test_event_call ...passed 00:06:00.698 Test: test_schedule_thread ...passed 00:06:00.698 Test: test_reschedule_thread ...passed 00:06:00.698 Test: test_bind_thread ...passed 00:06:00.698 Test: test_for_each_reactor ...passed 00:06:00.698 Test: test_reactor_stats ...passed 00:06:00.698 Test: test_scheduler ...passed 00:06:00.698 Test: test_governor ...passed 00:06:00.698 00:06:00.698 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.698 suites 1 1 n/a 0 0 00:06:00.698 tests 10 10 10 0 0 00:06:00.698 asserts 344 344 344 0 n/a 00:06:00.698 00:06:00.698 Elapsed time = 0.014 seconds 00:06:00.699 00:06:00.699 real 0m0.088s 00:06:00.699 user 0m0.037s 00:06:00.699 sys 0m0.031s 00:06:00.699 13:58:46 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.699 13:58:46 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:06:00.699 ************************************ 00:06:00.699 END TEST unittest_event 00:06:00.699 ************************************ 00:06:00.699 13:58:46 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:00.699 13:58:46 unittest -- unit/unittest.sh@235 -- # uname -s 00:06:00.699 13:58:46 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:06:00.699 13:58:46 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:06:00.699 13:58:46 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.699 13:58:46 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.699 13:58:46 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:00.699 ************************************ 00:06:00.699 START TEST unittest_ftl 00:06:00.699 ************************************ 00:06:00.699 13:58:46 unittest.unittest_ftl -- common/autotest_common.sh@1123 -- # unittest_ftl 00:06:00.699 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:00.699 00:06:00.699 00:06:00.699 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.699 http://cunit.sourceforge.net/ 00:06:00.699 00:06:00.699 00:06:00.699 Suite: ftl_band_suite 00:06:00.699 Test: test_band_block_offset_from_addr_base ...passed 00:06:00.699 Test: 
test_band_block_offset_from_addr_offset ...passed 00:06:00.699 Test: test_band_addr_from_block_offset ...passed 00:06:00.699 Test: test_band_set_addr ...passed 00:06:00.699 Test: test_invalidate_addr ...passed 00:06:00.956 Test: test_next_xfer_addr ...passed 00:06:00.956 00:06:00.956 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.956 suites 1 1 n/a 0 0 00:06:00.956 tests 6 6 6 0 0 00:06:00.956 asserts 30356 30356 30356 0 n/a 00:06:00.956 00:06:00.956 Elapsed time = 0.147 seconds 00:06:00.956 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:00.956 00:06:00.956 00:06:00.956 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.956 http://cunit.sourceforge.net/ 00:06:00.956 00:06:00.956 00:06:00.956 Suite: ftl_bitmap 00:06:00.956 Test: test_ftl_bitmap_create ...[2024-07-15 13:58:46.776287] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:00.956 [2024-07-15 13:58:46.776494] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:00.956 passed 00:06:00.956 Test: test_ftl_bitmap_get ...passed 00:06:00.956 Test: test_ftl_bitmap_set ...passed 00:06:00.956 Test: test_ftl_bitmap_clear ...passed 00:06:00.956 Test: test_ftl_bitmap_find_first_set ...passed 00:06:00.956 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:00.956 Test: test_ftl_bitmap_count_set ...passed 00:06:00.956 00:06:00.956 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.957 suites 1 1 n/a 0 0 00:06:00.957 tests 7 7 7 0 0 00:06:00.957 asserts 137 137 137 0 n/a 00:06:00.957 00:06:00.957 Elapsed time = 0.001 seconds 00:06:00.957 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:00.957 00:06:00.957 00:06:00.957 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.957 http://cunit.sourceforge.net/ 00:06:00.957 00:06:00.957 00:06:00.957 Suite: ftl_io_suite 00:06:00.957 Test: test_completion ...passed 00:06:00.957 Test: test_multiple_ios ...passed 00:06:00.957 00:06:00.957 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.957 suites 1 1 n/a 0 0 00:06:00.957 tests 2 2 2 0 0 00:06:00.957 asserts 47 47 47 0 n/a 00:06:00.957 00:06:00.957 Elapsed time = 0.002 seconds 00:06:00.957 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:00.957 00:06:00.957 00:06:00.957 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.957 http://cunit.sourceforge.net/ 00:06:00.957 00:06:00.957 00:06:00.957 Suite: ftl_mngt 00:06:00.957 Test: test_next_step ...passed 00:06:00.957 Test: test_continue_step ...passed 00:06:00.957 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:00.957 Test: test_fail_step ...passed 00:06:00.957 Test: test_mngt_call_and_call_rollback ...passed 00:06:00.957 Test: test_nested_process_failure ...passed 00:06:00.957 Test: test_call_init_success ...passed 00:06:00.957 Test: test_call_init_failure ...passed 00:06:00.957 00:06:00.957 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.957 suites 1 1 n/a 0 0 00:06:00.957 tests 8 8 8 0 0 00:06:00.957 asserts 196 196 196 0 n/a 00:06:00.957 00:06:00.957 Elapsed time = 0.001 seconds 00:06:00.957 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:00.957 00:06:00.957 00:06:00.957 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.957 http://cunit.sourceforge.net/ 00:06:00.957 00:06:00.957 00:06:00.957 Suite: ftl_mempool 00:06:00.957 Test: test_ftl_mempool_create ...passed 00:06:00.957 Test: test_ftl_mempool_get_put ...passed 00:06:00.957 00:06:00.957 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.957 suites 1 1 n/a 0 0 00:06:00.957 tests 2 2 2 0 0 00:06:00.957 asserts 36 36 36 0 n/a 00:06:00.957 00:06:00.957 Elapsed time = 0.000 seconds 00:06:00.957 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:00.957 00:06:00.957 00:06:00.957 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.957 http://cunit.sourceforge.net/ 00:06:00.957 00:06:00.957 00:06:00.957 Suite: ftl_addr64_suite 00:06:00.957 Test: test_addr_cached ...passed 00:06:00.957 00:06:00.957 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.957 suites 1 1 n/a 0 0 00:06:00.957 tests 1 1 1 0 0 00:06:00.957 asserts 1536 1536 1536 0 n/a 00:06:00.957 00:06:00.957 Elapsed time = 0.000 seconds 00:06:00.957 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:00.957 00:06:00.957 00:06:00.957 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.957 http://cunit.sourceforge.net/ 00:06:00.957 00:06:00.957 00:06:00.957 Suite: ftl_sb 00:06:00.957 Test: test_sb_crc_v2 ...passed 00:06:00.957 Test: test_sb_crc_v3 ...passed 00:06:00.957 Test: test_sb_v3_md_layout ...[2024-07-15 13:58:46.881000] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:00.957 [2024-07-15 13:58:46.881283] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:00.957 [2024-07-15 13:58:46.881328] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:00.957 [2024-07-15 13:58:46.881375] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:00.957 [2024-07-15 13:58:46.881414] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:00.957 [2024-07-15 13:58:46.881508] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:00.957 [2024-07-15 13:58:46.881554] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:00.957 [2024-07-15 13:58:46.881611] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:00.957 passed 00:06:00.957 Test: test_sb_v5_md_layout ...[2024-07-15 13:58:46.881669] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:00.957 [2024-07-15 13:58:46.881710] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: 
[FTL][(null)] Multiple/looping regions found 00:06:00.957 [2024-07-15 13:58:46.881773] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:00.957 passed 00:06:00.957 00:06:00.957 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.957 suites 1 1 n/a 0 0 00:06:00.957 tests 4 4 4 0 0 00:06:00.957 asserts 160 160 160 0 n/a 00:06:00.957 00:06:00.957 Elapsed time = 0.002 seconds 00:06:00.957 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:00.957 00:06:00.957 00:06:00.957 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.957 http://cunit.sourceforge.net/ 00:06:00.957 00:06:00.957 00:06:00.957 Suite: ftl_layout_upgrade 00:06:00.957 Test: test_l2p_upgrade ...passed 00:06:00.957 00:06:00.957 Run Summary: Type Total Ran Passed Failed Inactive 00:06:00.957 suites 1 1 n/a 0 0 00:06:00.957 tests 1 1 1 0 0 00:06:00.957 asserts 152 152 152 0 n/a 00:06:00.957 00:06:00.957 Elapsed time = 0.000 seconds 00:06:00.957 13:58:46 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:06:00.957 00:06:00.957 00:06:00.957 CUnit - A unit testing framework for C - Version 2.1-3 00:06:00.957 http://cunit.sourceforge.net/ 00:06:00.957 00:06:00.957 00:06:00.957 Suite: ftl_p2l_suite 00:06:00.957 Test: test_p2l_num_pages ...passed 00:06:01.522 Test: test_ckpt_issue ...passed 00:06:02.087 Test: test_persist_band_p2l ...passed 00:06:02.344 Test: test_clean_restore_p2l ...passed 00:06:03.720 Test: test_dirty_restore_p2l ...passed 00:06:03.720 00:06:03.720 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.720 suites 1 1 n/a 0 0 00:06:03.720 tests 5 5 5 0 0 00:06:03.720 asserts 10020 10020 10020 0 n/a 00:06:03.720 00:06:03.720 Elapsed time = 2.365 seconds 00:06:03.720 00:06:03.720 real 0m2.762s 00:06:03.720 user 0m0.918s 00:06:03.720 sys 0m1.824s 00:06:03.720 13:58:49 unittest.unittest_ftl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.720 ************************************ 00:06:03.720 END TEST unittest_ftl 00:06:03.720 ************************************ 00:06:03.720 13:58:49 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:03.720 13:58:49 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 ************************************ 00:06:03.720 START TEST unittest_accel 00:06:03.720 ************************************ 00:06:03.720 13:58:49 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:03.720 00:06:03.720 00:06:03.720 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.720 http://cunit.sourceforge.net/ 00:06:03.720 00:06:03.720 00:06:03.720 Suite: accel_sequence 00:06:03.720 Test: test_sequence_fill_copy ...passed 00:06:03.720 Test: test_sequence_abort ...passed 00:06:03.720 Test: test_sequence_append_error ...passed 00:06:03.720 Test: test_sequence_completion_error 
...[2024-07-15 13:58:49.391255] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1945:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f62d1c287c0 00:06:03.720 [2024-07-15 13:58:49.391619] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1945:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7f62d1c287c0 00:06:03.720 [2024-07-15 13:58:49.391791] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1855:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7f62d1c287c0 00:06:03.720 [2024-07-15 13:58:49.391877] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1855:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7f62d1c287c0 00:06:03.720 passed 00:06:03.720 Test: test_sequence_decompress ...passed 00:06:03.720 Test: test_sequence_reverse ...passed 00:06:03.720 Test: test_sequence_copy_elision ...passed 00:06:03.720 Test: test_sequence_accel_buffers ...passed 00:06:03.720 Test: test_sequence_memory_domain ...[2024-07-15 13:58:49.402051] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1747:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:03.720 [2024-07-15 13:58:49.402293] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1786:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:03.720 passed 00:06:03.720 Test: test_sequence_module_memory_domain ...passed 00:06:03.720 Test: test_sequence_crypto ...passed 00:06:03.720 Test: test_sequence_driver ...[2024-07-15 13:58:49.408176] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1894:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7f62d0aa77c0 using driver: ut 00:06:03.720 passed 00:06:03.720 Test: test_sequence_same_iovs ...[2024-07-15 13:58:49.408323] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1958:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7f62d0aa77c0 through driver: ut 00:06:03.720 passed 00:06:03.720 Test: test_sequence_crc32 ...passed 00:06:03.720 Suite: accel 00:06:03.720 Test: test_spdk_accel_task_complete ...passed 00:06:03.720 Test: test_get_task ...passed 00:06:03.720 Test: test_spdk_accel_submit_copy ...passed 00:06:03.720 Test: test_spdk_accel_submit_dualcast ...[2024-07-15 13:58:49.412916] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:03.720 [2024-07-15 13:58:49.412998] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:03.720 passed 00:06:03.720 Test: test_spdk_accel_submit_compare ...passed 00:06:03.720 Test: test_spdk_accel_submit_fill ...passed 00:06:03.720 Test: test_spdk_accel_submit_crc32c ...passed 00:06:03.720 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:03.720 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:03.720 Test: test_spdk_accel_submit_xor ...passed 00:06:03.720 Test: test_spdk_accel_module_find_by_name ...passed 00:06:03.720 Test: test_spdk_accel_module_register ...passed 00:06:03.720 00:06:03.720 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.720 suites 2 2 n/a 0 0 00:06:03.720 tests 26 26 26 0 0 00:06:03.720 asserts 830 830 830 0 n/a 00:06:03.720 00:06:03.720 Elapsed time = 0.032 seconds 00:06:03.720 00:06:03.720 real 0m0.064s 00:06:03.720 user 0m0.027s 00:06:03.720 sys 0m0.037s 00:06:03.720 13:58:49 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:06:03.720 13:58:49 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 ************************************ 00:06:03.720 END TEST unittest_accel 00:06:03.720 ************************************ 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:03.720 13:58:49 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.720 13:58:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:03.720 ************************************ 00:06:03.721 START TEST unittest_ioat 00:06:03.721 ************************************ 00:06:03.721 13:58:49 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:03.721 00:06:03.721 00:06:03.721 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.721 http://cunit.sourceforge.net/ 00:06:03.721 00:06:03.721 00:06:03.721 Suite: ioat 00:06:03.721 Test: ioat_state_check ...passed 00:06:03.721 00:06:03.721 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.721 suites 1 1 n/a 0 0 00:06:03.721 tests 1 1 1 0 0 00:06:03.721 asserts 32 32 32 0 n/a 00:06:03.721 00:06:03.721 Elapsed time = 0.000 seconds 00:06:03.721 00:06:03.721 real 0m0.026s 00:06:03.721 user 0m0.018s 00:06:03.721 sys 0m0.008s 00:06:03.721 13:58:49 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.721 13:58:49 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:06:03.721 ************************************ 00:06:03.721 END TEST unittest_ioat 00:06:03.721 ************************************ 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:03.721 13:58:49 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:03.721 13:58:49 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:03.721 ************************************ 00:06:03.721 START TEST unittest_idxd_user 00:06:03.721 ************************************ 00:06:03.721 13:58:49 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:03.721 00:06:03.721 00:06:03.721 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.721 http://cunit.sourceforge.net/ 00:06:03.721 00:06:03.721 00:06:03.721 Suite: idxd_user 00:06:03.721 Test: test_idxd_wait_cmd ...[2024-07-15 13:58:49.572987] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:03.721 [2024-07-15 13:58:49.573230] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:03.721 passed 00:06:03.721 Test: test_idxd_reset_dev ...passed 00:06:03.721 Test: test_idxd_group_config ...passed 00:06:03.721 Test: test_idxd_wq_config ...passed 00:06:03.721 00:06:03.721 [2024-07-15 
13:58:49.573342] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:03.721 [2024-07-15 13:58:49.573385] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:03.721 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.721 suites 1 1 n/a 0 0 00:06:03.721 tests 4 4 4 0 0 00:06:03.721 asserts 20 20 20 0 n/a 00:06:03.721 00:06:03.721 Elapsed time = 0.001 seconds 00:06:03.721 00:06:03.721 real 0m0.022s 00:06:03.721 user 0m0.013s 00:06:03.721 sys 0m0.009s 00:06:03.721 13:58:49 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.721 ************************************ 00:06:03.721 END TEST unittest_idxd_user 00:06:03.721 ************************************ 00:06:03.721 13:58:49 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:03.721 13:58:49 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.721 13:58:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:03.721 ************************************ 00:06:03.721 START TEST unittest_iscsi 00:06:03.721 ************************************ 00:06:03.721 13:58:49 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:06:03.721 13:58:49 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:03.721 00:06:03.721 00:06:03.721 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.721 http://cunit.sourceforge.net/ 00:06:03.721 00:06:03.721 00:06:03.721 Suite: conn_suite 00:06:03.721 Test: read_task_split_in_order_case ...passed 00:06:03.721 Test: read_task_split_reverse_order_case ...passed 00:06:03.721 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:03.721 Test: process_non_read_task_completion_test ...passed 00:06:03.721 Test: free_tasks_on_connection ...passed 00:06:03.721 Test: free_tasks_with_queued_datain ...passed 00:06:03.721 Test: abort_queued_datain_task_test ...passed 00:06:03.721 Test: abort_queued_datain_tasks_test ...passed 00:06:03.721 00:06:03.721 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.721 suites 1 1 n/a 0 0 00:06:03.721 tests 8 8 8 0 0 00:06:03.721 asserts 230 230 230 0 n/a 00:06:03.721 00:06:03.721 Elapsed time = 0.000 seconds 00:06:03.721 13:58:49 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:03.721 00:06:03.721 00:06:03.721 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.721 http://cunit.sourceforge.net/ 00:06:03.721 00:06:03.721 00:06:03.721 Suite: iscsi_suite 00:06:03.721 Test: param_negotiation_test ...passed 00:06:03.721 Test: list_negotiation_test ...passed 00:06:03.721 Test: parse_valid_test ...passed 00:06:03.721 Test: parse_invalid_test ...[2024-07-15 13:58:49.672439] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:06:03.721 [2024-07-15 13:58:49.672675] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:06:03.721 [2024-07-15 13:58:49.672746] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 
207:iscsi_parse_param: *ERROR*: Empty key 00:06:03.721 [2024-07-15 13:58:49.672822] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:03.721 [2024-07-15 13:58:49.672966] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:03.721 [2024-07-15 13:58:49.673073] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:03.721 passed 00:06:03.721 00:06:03.721 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.721 suites 1 1 n/a 0 0 00:06:03.721 tests 4 4 4 0 0 00:06:03.721 asserts 161 161 161 0 n/a 00:06:03.721 00:06:03.721 Elapsed time = 0.004 seconds 00:06:03.721 [2024-07-15 13:58:49.673209] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:03.721 13:58:49 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:03.721 00:06:03.721 00:06:03.721 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.721 http://cunit.sourceforge.net/ 00:06:03.721 00:06:03.721 00:06:03.721 Suite: iscsi_target_node_suite 00:06:03.721 Test: add_lun_test_cases ...[2024-07-15 13:58:49.699359] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:03.721 passed 00:06:03.721 Test: allow_any_allowed ...passed 00:06:03.721 Test: allow_ipv6_allowed ...passed 00:06:03.721 Test: allow_ipv6_denied ...passed 00:06:03.721 Test: allow_ipv6_invalid ...passed 00:06:03.721 Test: allow_ipv4_allowed ...passed 00:06:03.721 Test: allow_ipv4_denied ...passed 00:06:03.721 Test: allow_ipv4_invalid ...passed 00:06:03.721 Test: node_access_allowed ...[2024-07-15 13:58:49.699604] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:03.721 [2024-07-15 13:58:49.699695] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:03.721 [2024-07-15 13:58:49.699753] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:03.721 [2024-07-15 13:58:49.699795] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:06:03.721 passed 00:06:03.721 Test: node_access_denied_by_empty_netmask ...passed 00:06:03.721 Test: node_access_multi_initiator_groups_cases ...passed 00:06:03.721 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:03.721 Test: chap_param_test_cases ...[2024-07-15 13:58:49.700151] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:03.721 [2024-07-15 13:58:49.700200] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:03.721 [2024-07-15 13:58:49.700256] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:03.721 [2024-07-15 13:58:49.700293] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:03.721 passed 00:06:03.721 00:06:03.721 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.721 suites 1 1 n/a 0 0 00:06:03.721 tests 13 
13 13 0 0 00:06:03.721 asserts 50 50 50 0 n/a 00:06:03.721 00:06:03.721 Elapsed time = 0.001 seconds 00:06:03.721 [2024-07-15 13:58:49.700333] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:03.721 13:58:49 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:04.003 00:06:04.003 00:06:04.003 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.003 http://cunit.sourceforge.net/ 00:06:04.003 00:06:04.003 00:06:04.003 Suite: iscsi_suite 00:06:04.003 Test: op_login_check_target_test ...passed 00:06:04.003 Test: op_login_session_normal_test ...[2024-07-15 13:58:49.725649] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:06:04.003 [2024-07-15 13:58:49.725897] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:04.003 [2024-07-15 13:58:49.725945] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:04.003 [2024-07-15 13:58:49.725981] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:04.003 [2024-07-15 13:58:49.726037] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:04.003 [2024-07-15 13:58:49.726118] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:04.003 passed 00:06:04.003 Test: maxburstlength_test ...[2024-07-15 13:58:49.726201] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:04.003 [2024-07-15 13:58:49.726259] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:04.003 [2024-07-15 13:58:49.726403] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:04.003 passed 00:06:04.003 Test: underflow_for_read_transfer_test ...passed 00:06:04.003 Test: underflow_for_zero_read_transfer_test ...passed 00:06:04.003 Test: underflow_for_request_sense_test ...[2024-07-15 13:58:49.726459] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:06:04.003 passed 00:06:04.003 Test: underflow_for_check_condition_test ...passed 00:06:04.003 Test: add_transfer_task_test ...passed 00:06:04.003 Test: get_transfer_task_test ...passed 00:06:04.003 Test: del_transfer_task_test ...passed 00:06:04.003 Test: clear_all_transfer_tasks_test ...passed 00:06:04.003 Test: build_iovs_test ...passed 00:06:04.003 Test: build_iovs_with_md_test ...passed 00:06:04.003 Test: pdu_hdr_op_login_test ...[2024-07-15 13:58:49.727250] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:04.003 [2024-07-15 13:58:49.727355] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:04.003 passed 00:06:04.003 Test: pdu_hdr_op_text_test ...passed 00:06:04.003 Test: pdu_hdr_op_logout_test ...[2024-07-15 13:58:49.727414] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:04.003 [2024-07-15 13:58:49.727489] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:04.003 [2024-07-15 13:58:49.727551] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:04.003 [2024-07-15 13:58:49.727586] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:04.003 passed 00:06:04.003 Test: pdu_hdr_op_scsi_test ...[2024-07-15 13:58:49.727643] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:06:04.003 [2024-07-15 13:58:49.727775] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:04.003 [2024-07-15 13:58:49.727808] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:04.003 [2024-07-15 13:58:49.727851] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:04.003 [2024-07-15 13:58:49.727924] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:04.003 [2024-07-15 13:58:49.727994] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:04.003 [2024-07-15 13:58:49.728133] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:04.003 passed 00:06:04.003 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-15 13:58:49.728215] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:04.003 [2024-07-15 13:58:49.728268] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:04.003 passed 00:06:04.003 Test: pdu_hdr_op_nopout_test ...passed 00:06:04.003 Test: pdu_hdr_op_data_test ...[2024-07-15 13:58:49.728404] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:04.003 [2024-07-15 13:58:49.728482] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:04.003 [2024-07-15 13:58:49.728525] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:04.003 [2024-07-15 13:58:49.728572] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:04.003 [2024-07-15 13:58:49.728610] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:04.003 [2024-07-15 13:58:49.728656] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:04.003 [2024-07-15 13:58:49.728698] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: 
*ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:04.003 [2024-07-15 13:58:49.728757] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:04.003 [2024-07-15 13:58:49.728802] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:04.003 passed 00:06:04.003 Test: empty_text_with_cbit_test ...passed 00:06:04.003 Test: pdu_payload_read_test ...[2024-07-15 13:58:49.728867] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:04.003 [2024-07-15 13:58:49.728899] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:04.003 [2024-07-15 13:58:49.729749] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:04.003 passed 00:06:04.003 Test: data_out_pdu_sequence_test ...passed 00:06:04.003 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:04.003 00:06:04.003 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.003 suites 1 1 n/a 0 0 00:06:04.003 tests 24 24 24 0 0 00:06:04.003 asserts 150253 150253 150253 0 n/a 00:06:04.003 00:06:04.003 Elapsed time = 0.008 seconds 00:06:04.003 13:58:49 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:04.003 00:06:04.003 00:06:04.003 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.003 http://cunit.sourceforge.net/ 00:06:04.003 00:06:04.003 00:06:04.003 Suite: init_grp_suite 00:06:04.003 Test: create_initiator_group_success_case ...passed 00:06:04.003 Test: find_initiator_group_success_case ...passed 00:06:04.003 Test: register_initiator_group_twice_case ...passed 00:06:04.003 Test: add_initiator_name_success_case ...passed 00:06:04.003 Test: add_initiator_name_fail_case ...[2024-07-15 13:58:49.758076] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:04.003 passed 00:06:04.003 Test: delete_all_initiator_names_success_case ...passed 00:06:04.003 Test: add_netmask_success_case ...passed 00:06:04.003 Test: add_netmask_fail_case ...passed 00:06:04.003 Test: delete_all_netmasks_success_case ...passed 00:06:04.003 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:04.003 Test: netmask_overwrite_all_to_any_case ...[2024-07-15 13:58:49.758590] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:04.003 passed 00:06:04.003 Test: add_delete_initiator_names_case ...passed 00:06:04.003 Test: add_duplicated_initiator_names_case ...passed 00:06:04.003 Test: delete_nonexisting_initiator_names_case ...passed 00:06:04.003 Test: add_delete_netmasks_case ...passed 00:06:04.003 Test: add_duplicated_netmasks_case ...passed 00:06:04.003 Test: delete_nonexisting_netmasks_case ...passed 00:06:04.003 00:06:04.003 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.003 suites 1 1 n/a 0 0 00:06:04.003 tests 17 17 17 0 0 00:06:04.003 asserts 108 108 108 0 n/a 00:06:04.003 00:06:04.003 Elapsed time = 0.001 seconds 00:06:04.003 13:58:49 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:04.003 00:06:04.003 00:06:04.003 CUnit - A unit 
testing framework for C - Version 2.1-3 00:06:04.003 http://cunit.sourceforge.net/ 00:06:04.003 00:06:04.003 00:06:04.003 Suite: portal_grp_suite 00:06:04.003 Test: portal_create_ipv4_normal_case ...passed 00:06:04.003 Test: portal_create_ipv6_normal_case ...passed 00:06:04.004 Test: portal_create_ipv4_wildcard_case ...passed 00:06:04.004 Test: portal_create_ipv6_wildcard_case ...passed 00:06:04.004 Test: portal_create_twice_case ...passed 00:06:04.004 Test: portal_grp_register_unregister_case ...passed 00:06:04.004 Test: portal_grp_register_twice_case ...passed 00:06:04.004 Test: portal_grp_add_delete_case ...[2024-07-15 13:58:49.780418] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:04.004 passed 00:06:04.004 Test: portal_grp_add_delete_twice_case ...passed 00:06:04.004 00:06:04.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.004 suites 1 1 n/a 0 0 00:06:04.004 tests 9 9 9 0 0 00:06:04.004 asserts 44 44 44 0 n/a 00:06:04.004 00:06:04.004 Elapsed time = 0.002 seconds 00:06:04.004 00:06:04.004 real 0m0.160s 00:06:04.004 user 0m0.094s 00:06:04.004 sys 0m0.066s 00:06:04.004 13:58:49 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.004 13:58:49 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:06:04.004 ************************************ 00:06:04.004 END TEST unittest_iscsi 00:06:04.004 ************************************ 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:04.004 13:58:49 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:04.004 ************************************ 00:06:04.004 START TEST unittest_json 00:06:04.004 ************************************ 00:06:04.004 13:58:49 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:06:04.004 13:58:49 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:04.004 00:06:04.004 00:06:04.004 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.004 http://cunit.sourceforge.net/ 00:06:04.004 00:06:04.004 00:06:04.004 Suite: json 00:06:04.004 Test: test_parse_literal ...passed 00:06:04.004 Test: test_parse_string_simple ...passed 00:06:04.004 Test: test_parse_string_control_chars ...passed 00:06:04.004 Test: test_parse_string_utf8 ...passed 00:06:04.004 Test: test_parse_string_escapes_twochar ...passed 00:06:04.004 Test: test_parse_string_escapes_unicode ...passed 00:06:04.004 Test: test_parse_number ...passed 00:06:04.004 Test: test_parse_array ...passed 00:06:04.004 Test: test_parse_object ...passed 00:06:04.004 Test: test_parse_nesting ...passed 00:06:04.004 Test: test_parse_comment ...passed 00:06:04.004 00:06:04.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.004 suites 1 1 n/a 0 0 00:06:04.004 tests 11 11 11 0 0 00:06:04.004 asserts 1516 1516 1516 0 n/a 00:06:04.004 00:06:04.004 Elapsed time = 0.001 seconds 00:06:04.004 13:58:49 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:04.004 00:06:04.004 00:06:04.004 CUnit - A unit testing framework for C - Version 
2.1-3 00:06:04.004 http://cunit.sourceforge.net/ 00:06:04.004 00:06:04.004 00:06:04.004 Suite: json 00:06:04.004 Test: test_strequal ...passed 00:06:04.004 Test: test_num_to_uint16 ...passed 00:06:04.004 Test: test_num_to_int32 ...passed 00:06:04.004 Test: test_num_to_uint64 ...passed 00:06:04.004 Test: test_decode_object ...passed 00:06:04.004 Test: test_decode_array ...passed 00:06:04.004 Test: test_decode_bool ...passed 00:06:04.004 Test: test_decode_uint16 ...passed 00:06:04.004 Test: test_decode_int32 ...passed 00:06:04.004 Test: test_decode_uint32 ...passed 00:06:04.004 Test: test_decode_uint64 ...passed 00:06:04.004 Test: test_decode_string ...passed 00:06:04.004 Test: test_decode_uuid ...passed 00:06:04.004 Test: test_find ...passed 00:06:04.004 Test: test_find_array ...passed 00:06:04.004 Test: test_iterating ...passed 00:06:04.004 Test: test_free_object ...passed 00:06:04.004 00:06:04.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.004 suites 1 1 n/a 0 0 00:06:04.004 tests 17 17 17 0 0 00:06:04.004 asserts 236 236 236 0 n/a 00:06:04.004 00:06:04.004 Elapsed time = 0.001 seconds 00:06:04.004 13:58:49 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:04.004 00:06:04.004 00:06:04.004 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.004 http://cunit.sourceforge.net/ 00:06:04.004 00:06:04.004 00:06:04.004 Suite: json 00:06:04.004 Test: test_write_literal ...passed 00:06:04.004 Test: test_write_string_simple ...passed 00:06:04.004 Test: test_write_string_escapes ...passed 00:06:04.004 Test: test_write_string_utf16le ...passed 00:06:04.004 Test: test_write_number_int32 ...passed 00:06:04.004 Test: test_write_number_uint32 ...passed 00:06:04.004 Test: test_write_number_uint128 ...passed 00:06:04.004 Test: test_write_string_number_uint128 ...passed 00:06:04.004 Test: test_write_number_int64 ...passed 00:06:04.004 Test: test_write_number_uint64 ...passed 00:06:04.004 Test: test_write_number_double ...passed 00:06:04.004 Test: test_write_uuid ...passed 00:06:04.004 Test: test_write_array ...passed 00:06:04.004 Test: test_write_object ...passed 00:06:04.004 Test: test_write_nesting ...passed 00:06:04.004 Test: test_write_val ...passed 00:06:04.004 00:06:04.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.004 suites 1 1 n/a 0 0 00:06:04.004 tests 16 16 16 0 0 00:06:04.004 asserts 918 918 918 0 n/a 00:06:04.004 00:06:04.004 Elapsed time = 0.003 seconds 00:06:04.004 13:58:49 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:04.004 00:06:04.004 00:06:04.004 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.004 http://cunit.sourceforge.net/ 00:06:04.004 00:06:04.004 00:06:04.004 Suite: jsonrpc 00:06:04.004 Test: test_parse_request ...passed 00:06:04.004 Test: test_parse_request_streaming ...passed 00:06:04.004 00:06:04.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.004 suites 1 1 n/a 0 0 00:06:04.004 tests 2 2 2 0 0 00:06:04.004 asserts 289 289 289 0 n/a 00:06:04.004 00:06:04.004 Elapsed time = 0.002 seconds 00:06:04.004 00:06:04.004 real 0m0.086s 00:06:04.004 user 0m0.048s 00:06:04.004 sys 0m0.038s 00:06:04.004 13:58:49 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.004 ************************************ 00:06:04.004 END TEST unittest_json 00:06:04.004 ************************************ 00:06:04.004 
13:58:49 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:04.004 13:58:49 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.004 13:58:49 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:04.004 ************************************ 00:06:04.004 START TEST unittest_rpc 00:06:04.004 ************************************ 00:06:04.004 13:58:49 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:06:04.004 13:58:49 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:04.004 00:06:04.004 00:06:04.004 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.004 http://cunit.sourceforge.net/ 00:06:04.004 00:06:04.004 00:06:04.004 Suite: rpc 00:06:04.004 Test: test_jsonrpc_handler ...passed 00:06:04.004 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:04.004 Test: test_rpc_get_methods ...[2024-07-15 13:58:49.992892] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:04.004 passed 00:06:04.004 Test: test_rpc_spdk_get_version ...passed 00:06:04.004 Test: test_spdk_rpc_listen_close ...passed 00:06:04.004 Test: test_rpc_run_multiple_servers ...passed 00:06:04.004 00:06:04.004 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.004 suites 1 1 n/a 0 0 00:06:04.004 tests 6 6 6 0 0 00:06:04.004 asserts 23 23 23 0 n/a 00:06:04.004 00:06:04.004 Elapsed time = 0.000 seconds 00:06:04.004 00:06:04.004 real 0m0.024s 00:06:04.004 user 0m0.015s 00:06:04.004 sys 0m0.009s 00:06:04.004 ************************************ 00:06:04.004 END TEST unittest_rpc 00:06:04.004 ************************************ 00:06:04.004 13:58:50 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.004 13:58:50 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.262 13:58:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:04.262 13:58:50 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:04.262 13:58:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.262 13:58:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.262 13:58:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:04.262 ************************************ 00:06:04.262 START TEST unittest_notify 00:06:04.262 ************************************ 00:06:04.262 13:58:50 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:04.262 00:06:04.262 00:06:04.262 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.262 http://cunit.sourceforge.net/ 00:06:04.262 00:06:04.262 00:06:04.262 Suite: app_suite 00:06:04.262 Test: notify ...passed 00:06:04.262 00:06:04.262 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.262 suites 1 1 n/a 0 0 00:06:04.262 tests 1 1 1 0 0 00:06:04.262 asserts 13 13 13 0 n/a 00:06:04.262 00:06:04.262 Elapsed time = 0.000 seconds 00:06:04.262 00:06:04.262 real 0m0.020s 00:06:04.262 user 0m0.011s 00:06:04.262 sys 0m0.009s 00:06:04.262 ************************************ 
00:06:04.262 END TEST unittest_notify 00:06:04.262 ************************************ 00:06:04.262 13:58:50 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.262 13:58:50 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:06:04.262 13:58:50 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:04.262 13:58:50 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:06:04.262 13:58:50 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.263 13:58:50 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.263 13:58:50 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:04.263 ************************************ 00:06:04.263 START TEST unittest_nvme 00:06:04.263 ************************************ 00:06:04.263 13:58:50 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:06:04.263 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:04.263 00:06:04.263 00:06:04.263 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.263 http://cunit.sourceforge.net/ 00:06:04.263 00:06:04.263 00:06:04.263 Suite: nvme 00:06:04.263 Test: test_opc_data_transfer ...passed 00:06:04.263 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:04.263 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:04.263 Test: test_trid_parse_and_compare ...[2024-07-15 13:58:50.129583] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:04.263 [2024-07-15 13:58:50.129840] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:04.263 [2024-07-15 13:58:50.129935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1211:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:04.263 [2024-07-15 13:58:50.129978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:04.263 [2024-07-15 13:58:50.130014] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:06:04.263 passed 00:06:04.263 Test: test_trid_trtype_str ...passed 00:06:04.263 Test: test_trid_adrfam_str ...passed 00:06:04.263 Test: test_nvme_ctrlr_probe ...[2024-07-15 13:58:50.130098] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:04.263 [2024-07-15 13:58:50.130342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:04.263 passed 00:06:04.263 Test: test_spdk_nvme_probe ...passed 00:06:04.263 Test: test_spdk_nvme_connect ...[2024-07-15 13:58:50.130443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:04.263 [2024-07-15 13:58:50.130484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:04.263 [2024-07-15 13:58:50.130627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:04.263 [2024-07-15 13:58:50.130669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:04.263 passed 00:06:04.263 Test: test_nvme_ctrlr_probe_internal ...[2024-07-15 13:58:50.130763] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:04.263 [2024-07-15 13:58:50.131020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:04.263 passed 00:06:04.263 Test: test_nvme_init_controllers ...passed 00:06:04.263 Test: test_nvme_driver_init ...[2024-07-15 13:58:50.131238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:04.263 [2024-07-15 13:58:50.131298] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:04.263 [2024-07-15 13:58:50.131375] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:04.263 [2024-07-15 13:58:50.131448] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:04.263 [2024-07-15 13:58:50.131484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:04.263 [2024-07-15 13:58:50.244820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:04.263 passed 00:06:04.263 Test: test_spdk_nvme_detach ...passed 00:06:04.263 Test: test_nvme_completion_poll_cb ...passed 00:06:04.263 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:04.263 Test: test_nvme_allocate_request_null ...passed 00:06:04.263 Test: test_nvme_allocate_request ...passed 00:06:04.263 Test: test_nvme_free_request ...passed 00:06:04.263 Test: test_nvme_allocate_request_user_copy ...passed 00:06:04.263 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:04.263 Test: test_nvme_request_check_timeout ...passed 00:06:04.263 Test: test_nvme_wait_for_completion ...passed 00:06:04.263 Test: test_spdk_nvme_parse_func ...passed 00:06:04.263 Test: test_spdk_nvme_detach_async ...[2024-07-15 13:58:50.245087] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:04.263 passed 00:06:04.263 Test: test_nvme_parse_addr ...[2024-07-15 13:58:50.245812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:04.263 passed 00:06:04.263 00:06:04.263 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.263 suites 1 1 n/a 0 0 00:06:04.263 tests 25 25 25 0 0 00:06:04.263 asserts 326 326 326 0 n/a 00:06:04.263 00:06:04.263 Elapsed time = 0.005 seconds 00:06:04.263 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:04.521 00:06:04.521 00:06:04.521 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.521 http://cunit.sourceforge.net/ 00:06:04.521 00:06:04.521 00:06:04.521 Suite: nvme_ctrlr 00:06:04.521 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-15 13:58:50.270110] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-15 13:58:50.271659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_init_en_0_rdy_0 
...[2024-07-15 13:58:50.272951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-15 13:58:50.274215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-15 13:58:50.275560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 [2024-07-15 13:58:50.276805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 13:58:50.278019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 13:58:50.279216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:04.521 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-15 13:58:50.281709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 [2024-07-15 13:58:50.284085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 13:58:50.285369] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:04.521 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-15 13:58:50.288003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 [2024-07-15 13:58:50.289325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 13:58:50.291861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:04.521 Test: test_nvme_ctrlr_init_delay ...[2024-07-15 13:58:50.294510] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_alloc_io_qpair_rr_1 ...[2024-07-15 13:58:50.295978] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 [2024-07-15 13:58:50.296263] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:04.521 [2024-07-15 13:58:50.296505] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:04.521 [2024-07-15 13:58:50.296594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:04.521 [2024-07-15 13:58:50.296657] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:04.521 passed 00:06:04.521 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:06:04.521 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:04.521 Test: test_alloc_io_qpair_wrr_1 ...passed 00:06:04.521 Test: test_alloc_io_qpair_wrr_2 ...[2024-07-15 13:58:50.296829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 [2024-07-15 13:58:50.297060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 [2024-07-15 13:58:50.297218] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:04.521 passed 00:06:04.521 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-15 13:58:50.297518] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:04.521 [2024-07-15 13:58:50.297708] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:04.521 [2024-07-15 13:58:50.297844] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_fail ...[2024-07-15 13:58:50.297944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:04.521 [2024-07-15 13:58:50.298024] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:06:04.521 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:04.521 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-15 13:58:50.298196] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:04.521 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-15 13:58:50.299652] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:04.521 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:04.521 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:04.521 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-15 13:58:50.455315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-15 13:58:50.462705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.521 passed 00:06:04.521 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-15 13:58:50.463974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.522 [2024-07-15 13:58:50.464091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3002:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:04.522 passed 00:06:04.522 Test: test_alloc_io_qpair_fail ...[2024-07-15 13:58:50.465337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.522 passed 00:06:04.522 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:04.522 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:06:04.522 Test: test_nvme_ctrlr_set_state ...passed 00:06:04.522 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-15 13:58:50.465457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:04.522 [2024-07-15 13:58:50.465572] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1546:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:06:04.522 [2024-07-15 13:58:50.465627] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.522 passed 00:06:04.522 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-15 13:58:50.482228] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.522 passed 00:06:04.522 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-15 13:58:50.517118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.522 passed 00:06:04.522 Test: test_nvme_ctrlr_reset ...[2024-07-15 13:58:50.518651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.522 passed 00:06:04.522 Test: test_nvme_ctrlr_aer_callback ...[2024-07-15 13:58:50.518974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.522 passed 00:06:04.522 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-15 13:58:50.520351] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.781 passed 00:06:04.781 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:04.781 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:04.781 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-15 13:58:50.522166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.781 passed 00:06:04.781 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:04.781 Test: test_nvme_ctrlr_ana_resize ...[2024-07-15 13:58:50.523649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.781 passed 00:06:04.781 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:04.781 Test: test_nvme_transport_ctrlr_ready ...[2024-07-15 13:58:50.525259] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:04.781 [2024-07-15 13:58:50.525325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4204:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:06:04.781 passed 00:06:04.781 Test: test_nvme_ctrlr_disable ...[2024-07-15 13:58:50.525374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4272:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:04.781 passed 00:06:04.781 00:06:04.781 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.781 suites 1 1 n/a 0 0 00:06:04.781 tests 44 44 44 0 0 00:06:04.781 asserts 10434 10434 10434 0 n/a 00:06:04.781 00:06:04.781 Elapsed time = 0.212 seconds 00:06:04.781 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:04.781 00:06:04.781 00:06:04.781 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:04.781 http://cunit.sourceforge.net/ 00:06:04.781 00:06:04.781 00:06:04.781 Suite: nvme_ctrlr_cmd 00:06:04.781 Test: test_get_log_pages ...passed 00:06:04.781 Test: test_set_feature_cmd ...passed 00:06:04.781 Test: test_set_feature_ns_cmd ...passed 00:06:04.781 Test: test_get_feature_cmd ...passed 00:06:04.781 Test: test_get_feature_ns_cmd ...passed 00:06:04.781 Test: test_abort_cmd ...passed 00:06:04.781 Test: test_set_host_id_cmds ...passed 00:06:04.781 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:04.781 Test: test_io_raw_cmd ...passed 00:06:04.781 Test: test_io_raw_cmd_with_md ...passed 00:06:04.781 Test: test_namespace_attach ...passed 00:06:04.781 Test: test_namespace_detach ...passed 00:06:04.781 Test: test_namespace_create ...passed 00:06:04.781 Test: test_namespace_delete ...passed 00:06:04.781 Test: test_doorbell_buffer_config ...passed 00:06:04.781 Test: test_format_nvme ...passed 00:06:04.781 Test: test_fw_commit ...passed 00:06:04.781 Test: test_fw_image_download ...passed 00:06:04.781 Test: test_sanitize ...passed 00:06:04.781 Test: test_directive ...passed 00:06:04.781 Test: test_nvme_request_add_abort ...passed 00:06:04.781 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:04.781 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:04.781 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:04.781 00:06:04.781 [2024-07-15 13:58:50.565503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:04.781 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.781 suites 1 1 n/a 0 0 00:06:04.781 tests 24 24 24 0 0 00:06:04.781 asserts 198 198 198 0 n/a 00:06:04.781 00:06:04.781 Elapsed time = 0.000 seconds 00:06:04.781 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:04.781 00:06:04.781 00:06:04.781 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.781 http://cunit.sourceforge.net/ 00:06:04.781 00:06:04.781 00:06:04.781 Suite: nvme_ctrlr_cmd 00:06:04.781 Test: test_geometry_cmd ...passed 00:06:04.781 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:04.781 00:06:04.781 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.781 suites 1 1 n/a 0 0 00:06:04.781 tests 2 2 2 0 0 00:06:04.781 asserts 7 7 7 0 n/a 00:06:04.781 00:06:04.781 Elapsed time = 0.000 seconds 00:06:04.781 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:04.781 00:06:04.781 00:06:04.781 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.781 http://cunit.sourceforge.net/ 00:06:04.781 00:06:04.781 00:06:04.781 Suite: nvme 00:06:04.781 Test: test_nvme_ns_construct ...passed 00:06:04.781 Test: test_nvme_ns_uuid ...passed 00:06:04.781 Test: test_nvme_ns_csi ...passed 00:06:04.781 Test: test_nvme_ns_data ...passed 00:06:04.781 Test: test_nvme_ns_set_identify_data ...passed 00:06:04.781 Test: test_spdk_nvme_ns_get_values ...passed 00:06:04.781 Test: test_spdk_nvme_ns_is_active ...passed 00:06:04.781 Test: spdk_nvme_ns_supports ...passed 00:06:04.781 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:04.781 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:04.781 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:04.781 Test: test_nvme_ns_find_id_desc ...passed 00:06:04.781 00:06:04.781 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:04.781 suites 1 1 n/a 0 0 00:06:04.781 tests 12 12 12 0 0 00:06:04.781 asserts 95 95 95 0 n/a 00:06:04.781 00:06:04.781 Elapsed time = 0.000 seconds 00:06:04.781 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:04.781 00:06:04.781 00:06:04.781 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.781 http://cunit.sourceforge.net/ 00:06:04.781 00:06:04.781 00:06:04.781 Suite: nvme_ns_cmd 00:06:04.781 Test: split_test ...passed 00:06:04.781 Test: split_test2 ...passed 00:06:04.781 Test: split_test3 ...passed 00:06:04.781 Test: split_test4 ...passed 00:06:04.781 Test: test_nvme_ns_cmd_flush ...passed 00:06:04.781 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:04.781 Test: test_nvme_ns_cmd_copy ...passed 00:06:04.781 Test: test_io_flags ...[2024-07-15 13:58:50.638457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:04.781 passed 00:06:04.781 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:04.781 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:04.781 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:04.781 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:04.781 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:04.781 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:04.781 Test: test_cmd_child_request ...passed 00:06:04.781 Test: test_nvme_ns_cmd_readv ...passed 00:06:04.781 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:04.781 Test: test_nvme_ns_cmd_writev ...passed 00:06:04.781 Test: test_nvme_ns_cmd_write_with_md ...[2024-07-15 13:58:50.639295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:04.781 passed 00:06:04.781 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:04.781 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:04.781 Test: test_nvme_ns_cmd_comparev ...passed 00:06:04.781 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:04.781 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:04.781 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:04.781 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:04.781 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:04.781 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-15 13:58:50.640645] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:04.781 passed 00:06:04.781 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:06:04.781 Test: test_nvme_ns_cmd_verify ...passed 00:06:04.781 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:04.781 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:04.781 00:06:04.781 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.781 suites 1 1 n/a 0 0 00:06:04.781 tests 32 32 32 0 0 00:06:04.781 asserts 550 550 550 0 n/a 00:06:04.781 00:06:04.781 Elapsed time = 0.003 seconds 00:06:04.781 [2024-07-15 13:58:50.640750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:04.781 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:04.781 00:06:04.781 00:06:04.781 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.781 http://cunit.sourceforge.net/ 
00:06:04.781 00:06:04.781 00:06:04.781 Suite: nvme_ns_cmd 00:06:04.781 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:06:04.781 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:04.781 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:04.781 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:04.781 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:04.781 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:04.781 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:04.782 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:04.782 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:04.782 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:04.782 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:04.782 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:04.782 00:06:04.782 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.782 suites 1 1 n/a 0 0 00:06:04.782 tests 12 12 12 0 0 00:06:04.782 asserts 123 123 123 0 n/a 00:06:04.782 00:06:04.782 Elapsed time = 0.001 seconds 00:06:04.782 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:04.782 00:06:04.782 00:06:04.782 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.782 http://cunit.sourceforge.net/ 00:06:04.782 00:06:04.782 00:06:04.782 Suite: nvme_qpair 00:06:04.782 Test: test3 ...passed 00:06:04.782 Test: test_ctrlr_failed ...passed 00:06:04.782 Test: struct_packing ...passed 00:06:04.782 Test: test_nvme_qpair_process_completions ...[2024-07-15 13:58:50.694102] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:04.782 [2024-07-15 13:58:50.694362] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:04.782 [2024-07-15 13:58:50.694421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:04.782 passed 00:06:04.782 Test: test_nvme_completion_is_retry ...passed 00:06:04.782 Test: test_get_status_string ...passed 00:06:04.782 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:06:04.782 Test: test_nvme_qpair_submit_request ...passed 00:06:04.782 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:04.782 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:04.782 Test: test_nvme_qpair_init_deinit ...[2024-07-15 13:58:50.694522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:04.782 passed 00:06:04.782 Test: test_nvme_get_sgl_print_info ...passed 00:06:04.782 00:06:04.782 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.782 suites 1 1 n/a 0 0 00:06:04.782 tests 12 12 12 0 0 00:06:04.782 asserts 154 154 154 0 n/a 00:06:04.782 00:06:04.782 Elapsed time = 0.001 seconds 00:06:04.782 [2024-07-15 13:58:50.694863] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:04.782 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:04.782 00:06:04.782 00:06:04.782 CUnit - A unit testing 
framework for C - Version 2.1-3 00:06:04.782 http://cunit.sourceforge.net/ 00:06:04.782 00:06:04.782 00:06:04.782 Suite: nvme_pcie 00:06:04.782 Test: test_prp_list_append ...[2024-07-15 13:58:50.716350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:04.782 [2024-07-15 13:58:50.716654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:04.782 [2024-07-15 13:58:50.716746] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:04.782 [2024-07-15 13:58:50.717049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:04.782 [2024-07-15 13:58:50.717162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:04.782 passed 00:06:04.782 Test: test_nvme_pcie_hotplug_monitor ...passed 00:06:04.782 Test: test_shadow_doorbell_update ...passed 00:06:04.782 Test: test_build_contig_hw_sgl_request ...passed 00:06:04.782 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:04.782 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:04.782 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:04.782 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:06:04.782 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:04.782 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:04.782 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-15 13:58:50.717400] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:04.782 [2024-07-15 13:58:50.717525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:06:04.782 passed 00:06:04.782 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:04.782 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:06:04.782 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-15 13:58:50.717642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:04.782 [2024-07-15 13:58:50.717716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:04.782 [2024-07-15 13:58:50.717820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:04.782 passed 00:06:04.782 00:06:04.782 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.782 suites 1 1 n/a 0 0 00:06:04.782 tests 14 14 14 0 0 00:06:04.782 asserts 235 235 235 0 n/a 00:06:04.782 00:06:04.782 Elapsed time = 0.002 seconds 00:06:04.782 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:04.782 00:06:04.782 00:06:04.782 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.782 http://cunit.sourceforge.net/ 00:06:04.782 00:06:04.782 00:06:04.782 Suite: nvme_ns_cmd 00:06:04.782 Test: nvme_poll_group_create_test ...passed 00:06:04.782 Test: nvme_poll_group_add_remove_test ...passed 00:06:04.782 Test: nvme_poll_group_process_completions ...passed 00:06:04.782 Test: nvme_poll_group_destroy_test ...passed 00:06:04.782 Test: nvme_poll_group_get_free_stats ...passed 00:06:04.782 00:06:04.782 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.782 suites 1 1 n/a 0 0 00:06:04.782 tests 5 5 5 0 0 00:06:04.782 asserts 75 75 75 0 n/a 00:06:04.782 00:06:04.782 Elapsed time = 0.000 seconds 00:06:04.782 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:04.782 00:06:04.782 00:06:04.782 CUnit - A unit testing framework for C - Version 2.1-3 00:06:04.782 http://cunit.sourceforge.net/ 00:06:04.782 00:06:04.782 00:06:04.782 Suite: nvme_quirks 00:06:04.782 Test: test_nvme_quirks_striping ...passed 00:06:04.782 00:06:04.782 Run Summary: Type Total Ran Passed Failed Inactive 00:06:04.782 suites 1 1 n/a 0 0 00:06:04.782 tests 1 1 1 0 0 00:06:04.782 asserts 5 5 5 0 n/a 00:06:04.782 00:06:04.782 Elapsed time = 0.000 seconds 00:06:04.782 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:05.041 00:06:05.041 00:06:05.041 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.041 http://cunit.sourceforge.net/ 00:06:05.041 00:06:05.041 00:06:05.041 Suite: nvme_tcp 00:06:05.041 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:05.041 Test: test_nvme_tcp_build_iovs ...passed 00:06:05.041 Test: test_nvme_tcp_build_sgl_request ...passed 00:06:05.041 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:05.041 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:05.041 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:05.041 Test: test_nvme_tcp_req_get ...passed 00:06:05.041 Test: test_nvme_tcp_req_init ...passed 00:06:05.041 Test: test_nvme_tcp_qpair_capsule_cmd_send ...[2024-07-15 13:58:50.791779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffc1460e2b0, and the iovcnt=16, remaining_size=28672 00:06:05.041 passed 00:06:05.041 
Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:05.041 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:06:05.041 Test: test_nvme_tcp_alloc_reqs ...[2024-07-15 13:58:50.792372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460ffe0 is same with the state(6) to be set 00:06:05.041 passed 00:06:05.041 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:06:05.041 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-15 13:58:50.792661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f1a0 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.792742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffc1460fd30 00:06:05.041 [2024-07-15 13:58:50.792791] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1249:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:05.041 [2024-07-15 13:58:50.792876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.792936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:05.041 [2024-07-15 13:58:50.793022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.793061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:05.041 [2024-07-15 13:58:50.793108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 passed 00:06:05.041 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-15 13:58:50.793160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.793202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.793261] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.793300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.793342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f660 is same with the state(5) to be set 00:06:05.041 [2024-07-15 13:58:50.793443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:05.041 [2024-07-15 13:58:50.793491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:05.041 passed 00:06:05.041 Test: test_nvme_tcp_qpair_icreq_send ...passed 
00:06:05.041 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:06:05.041 Test: test_nvme_tcp_icresp_handle ...[2024-07-15 13:58:50.793707] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:05.041 [2024-07-15 13:58:50.793814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc1460f870): PDU Sequence Error 00:06:05.042 [2024-07-15 13:58:50.793868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:05.042 [2024-07-15 13:58:50.793913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:05.042 [2024-07-15 13:58:50.793955] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f1a0 is same with the state(5) to be set 00:06:05.042 passed 00:06:05.042 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:06:05.042 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-15 13:58:50.794005] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:05.042 [2024-07-15 13:58:50.794047] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f1a0 is same with the state(5) to be set 00:06:05.042 [2024-07-15 13:58:50.794097] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460f1a0 is same with the state(0) to be set 00:06:05.042 [2024-07-15 13:58:50.794147] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1357:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffc1460fd30): PDU Sequence Error 00:06:05.042 [2024-07-15 13:58:50.794215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffc1460e470 00:06:05.042 passed 00:06:05.042 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:05.042 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-15 13:58:50.794319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffc1460daf0, errno=0, rc=0 00:06:05.042 [2024-07-15 13:58:50.794376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460daf0 is same with the state(5) to be set 00:06:05.042 [2024-07-15 13:58:50.794433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffc1460daf0 is same with the state(5) to be set 00:06:05.042 [2024-07-15 13:58:50.794498] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc1460daf0 (0): Success 00:06:05.042 [2024-07-15 13:58:50.794547] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffc1460daf0 (0): Success 00:06:05.042 passed 00:06:05.042 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-15 13:58:50.858249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:06:05.042 [2024-07-15 13:58:50.858398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:05.042 passed 00:06:05.042 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:06:05.042 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:06:05.042 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-15 13:58:50.858657] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:05.042 [2024-07-15 13:58:50.858711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:05.042 [2024-07-15 13:58:50.858959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2516:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:05.042 [2024-07-15 13:58:50.859016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:05.042 passed 00:06:05.042 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-15 13:58:50.859118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:05.042 [2024-07-15 13:58:50.859172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:05.042 [2024-07-15 13:58:50.859270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:06:05.042 [2024-07-15 13:58:50.859338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:05.042 passed 00:06:05.042 00:06:05.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.042 suites 1 1 n/a 0 0 00:06:05.042 tests 27 27 27 0 0 00:06:05.042 asserts 624 624 624 0 n/a 00:06:05.042 00:06:05.042 Elapsed time = 0.068 seconds 00:06:05.042 [2024-07-15 13:58:50.859472] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 848:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024 00:06:05.042 [2024-07-15 13:58:50.859527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:05.042 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:05.042 00:06:05.042 00:06:05.042 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.042 http://cunit.sourceforge.net/ 00:06:05.042 00:06:05.042 00:06:05.042 Suite: nvme_transport 00:06:05.042 Test: test_nvme_get_transport ...passed 00:06:05.042 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:05.042 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:05.042 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:05.042 Test: test_ctrlr_get_memory_domains ...passed 00:06:05.042 00:06:05.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.042 suites 1 1 n/a 0 0 00:06:05.042 tests 5 5 5 0 0 00:06:05.042 asserts 28 28 28 0 n/a 00:06:05.042 00:06:05.042 Elapsed time = 0.000 seconds 00:06:05.042 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:05.042 00:06:05.042 
00:06:05.042 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.042 http://cunit.sourceforge.net/ 00:06:05.042 00:06:05.042 00:06:05.042 Suite: nvme_io_msg 00:06:05.042 Test: test_nvme_io_msg_send ...passed 00:06:05.042 Test: test_nvme_io_msg_process ...passed 00:06:05.042 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:05.042 00:06:05.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.042 suites 1 1 n/a 0 0 00:06:05.042 tests 3 3 3 0 0 00:06:05.042 asserts 56 56 56 0 n/a 00:06:05.042 00:06:05.042 Elapsed time = 0.000 seconds 00:06:05.042 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:05.042 00:06:05.042 00:06:05.042 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.042 http://cunit.sourceforge.net/ 00:06:05.042 00:06:05.042 00:06:05.042 Suite: nvme_pcie_common 00:06:05.042 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-15 13:58:50.939934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:05.042 passed 00:06:05.042 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:05.042 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:05.042 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-15 13:58:50.940756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:05.042 [2024-07-15 13:58:50.940921] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:05.042 passed 00:06:05.042 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-15 13:58:50.940995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:05.042 passed 00:06:05.042 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-15 13:58:50.941482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:05.042 [2024-07-15 13:58:50.941566] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:05.042 passed 00:06:05.042 00:06:05.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.042 suites 1 1 n/a 0 0 00:06:05.042 tests 6 6 6 0 0 00:06:05.042 asserts 148 148 148 0 n/a 00:06:05.042 00:06:05.042 Elapsed time = 0.002 seconds 00:06:05.042 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:05.042 00:06:05.042 00:06:05.042 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.042 http://cunit.sourceforge.net/ 00:06:05.042 00:06:05.042 00:06:05.042 Suite: nvme_fabric 00:06:05.042 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:05.042 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:05.042 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:05.042 Test: test_nvme_fabric_discover_probe ...passed 00:06:05.042 Test: test_nvme_fabric_qpair_connect ...passed 00:06:05.042 00:06:05.042 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.042 suites 1 1 n/a 0 0 00:06:05.042 tests 5 5 5 0 0 00:06:05.042 asserts 60 60 60 0 n/a 00:06:05.042 00:06:05.042 Elapsed time = 0.001 seconds 00:06:05.043 [2024-07-15 
13:58:50.968001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:05.043 13:58:50 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:05.043 00:06:05.043 00:06:05.043 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.043 http://cunit.sourceforge.net/ 00:06:05.043 00:06:05.043 00:06:05.043 Suite: nvme_opal 00:06:05.043 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:05.043 Test: test_opal_add_short_atom_header ...passed 00:06:05.043 00:06:05.043 Run Summary: Type Total Ran Passed Failed Inactive 00:06:05.043 suites 1 1 n/a 0 0 00:06:05.043 tests 2 2 2 0 0 00:06:05.043 asserts 22 22 22 0 n/a 00:06:05.043 00:06:05.043 Elapsed time = 0.000 seconds 00:06:05.043 [2024-07-15 13:58:50.990016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:05.043 00:06:05.043 real 0m0.882s 00:06:05.043 user 0m0.419s 00:06:05.043 sys 0m0.303s 00:06:05.043 13:58:50 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.043 ************************************ 00:06:05.043 END TEST unittest_nvme 00:06:05.043 ************************************ 00:06:05.043 13:58:50 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.043 13:58:51 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:05.043 13:58:51 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:05.043 13:58:51 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.043 13:58:51 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.301 13:58:51 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:05.301 ************************************ 00:06:05.301 START TEST unittest_log 00:06:05.301 ************************************ 00:06:05.301 13:58:51 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:05.301 00:06:05.301 00:06:05.301 CUnit - A unit testing framework for C - Version 2.1-3 00:06:05.301 http://cunit.sourceforge.net/ 00:06:05.301 00:06:05.301 00:06:05.301 Suite: log 00:06:05.301 Test: log_test ...[2024-07-15 13:58:51.068547] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:06:05.301 passed 00:06:05.301 Test: deprecation ...[2024-07-15 13:58:51.068783] log_ut.c: 57:log_test: *DEBUG*: log test 00:06:05.301 log dump test: 00:06:05.301 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:05.301 spdk dump test: 00:06:05.301 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:05.301 spdk dump test: 00:06:05.301 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:05.301 00000010 65 20 63 68 61 72 73 e chars 00:06:06.238 passed 00:06:06.238 00:06:06.238 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.238 suites 1 1 n/a 0 0 00:06:06.238 tests 2 2 2 0 0 00:06:06.238 asserts 73 73 73 0 n/a 00:06:06.238 00:06:06.238 Elapsed time = 0.001 seconds 00:06:06.238 00:06:06.238 real 0m1.028s 00:06:06.238 user 0m0.015s 00:06:06.238 sys 0m0.013s 00:06:06.238 13:58:52 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.238 13:58:52 unittest.unittest_log -- common/autotest_common.sh@10 -- # 
set +x 00:06:06.238 ************************************ 00:06:06.238 END TEST unittest_log 00:06:06.238 ************************************ 00:06:06.238 13:58:52 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:06.238 13:58:52 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:06.238 13:58:52 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.238 13:58:52 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.238 13:58:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:06.238 ************************************ 00:06:06.238 START TEST unittest_lvol 00:06:06.238 ************************************ 00:06:06.238 13:58:52 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:06.238 00:06:06.238 00:06:06.238 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.238 http://cunit.sourceforge.net/ 00:06:06.238 00:06:06.238 00:06:06.238 Suite: lvol 00:06:06.238 Test: lvs_init_unload_success ...[2024-07-15 13:58:52.151606] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:06.238 passed 00:06:06.238 Test: lvs_init_destroy_success ...[2024-07-15 13:58:52.152238] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:06.238 passed 00:06:06.238 Test: lvs_init_opts_success ...passed 00:06:06.238 Test: lvs_unload_lvs_is_null_fail ...[2024-07-15 13:58:52.152565] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:06.238 passed 00:06:06.238 Test: lvs_names ...[2024-07-15 13:58:52.152663] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:06.238 [2024-07-15 13:58:52.152771] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:06:06.238 [2024-07-15 13:58:52.153046] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:06.238 passed 00:06:06.238 Test: lvol_create_destroy_success ...passed 00:06:06.238 Test: lvol_create_fail ...[2024-07-15 13:58:52.153774] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:06.238 [2024-07-15 13:58:52.153962] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:06.238 passed 00:06:06.238 Test: lvol_destroy_fail ...[2024-07-15 13:58:52.154369] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:06.238 passed 00:06:06.238 Test: lvol_close ...[2024-07-15 13:58:52.154664] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:06.238 [2024-07-15 13:58:52.154788] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:06.238 passed 00:06:06.238 Test: lvol_resize ...passed 00:06:06.238 Test: lvol_set_read_only ...passed 00:06:06.238 Test: test_lvs_load ...[2024-07-15 13:58:52.155813] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:06.238 [2024-07-15 13:58:52.155893] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:06.238 passed 00:06:06.238 Test: lvols_load ...[2024-07-15 13:58:52.156221] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:06.238 [2024-07-15 13:58:52.156419] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:06.238 passed 00:06:06.238 Test: lvol_open ...passed 00:06:06.238 Test: lvol_snapshot ...passed 00:06:06.238 Test: lvol_snapshot_fail ...[2024-07-15 13:58:52.157262] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:06.238 passed 00:06:06.238 Test: lvol_clone ...passed 00:06:06.238 Test: lvol_clone_fail ...[2024-07-15 13:58:52.158001] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:06.238 passed 00:06:06.238 Test: lvol_iter_clones ...passed 00:06:06.238 Test: lvol_refcnt ...[2024-07-15 13:58:52.158655] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 5331aa1e-cc12-4b3e-9c9c-268e845e938b because it is still open 00:06:06.238 passed 00:06:06.238 Test: lvol_names ...[2024-07-15 13:58:52.158978] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:06:06.238 [2024-07-15 13:58:52.159144] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:06.238 [2024-07-15 13:58:52.159439] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:06.238 passed 00:06:06.238 Test: lvol_create_thin_provisioned ...passed 00:06:06.238 Test: lvol_rename ...[2024-07-15 13:58:52.160043] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:06.238 [2024-07-15 13:58:52.160205] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:06.238 passed 00:06:06.238 Test: lvs_rename ...[2024-07-15 13:58:52.160481] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:06.238 passed 00:06:06.238 Test: lvol_inflate ...[2024-07-15 13:58:52.160798] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:06.238 passed 00:06:06.238 Test: lvol_decouple_parent ...[2024-07-15 13:58:52.161165] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:06.238 passed 00:06:06.238 Test: lvol_get_xattr ...passed 00:06:06.238 Test: lvol_esnap_reload ...passed 00:06:06.238 Test: lvol_esnap_create_bad_args ...[2024-07-15 13:58:52.161758] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:06.238 [2024-07-15 13:58:52.161853] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:06.238 [2024-07-15 13:58:52.161946] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:06.238 [2024-07-15 13:58:52.162150] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:06.238 [2024-07-15 13:58:52.162313] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:06.238 passed 00:06:06.238 Test: lvol_esnap_create_delete ...passed 00:06:06.238 Test: lvol_esnap_load_esnaps ...passed 00:06:06.238 Test: lvol_esnap_missing ...[2024-07-15 13:58:52.162700] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:06.238 [2024-07-15 13:58:52.162934] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:06.238 [2024-07-15 13:58:52.163037] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:06.238 passed 00:06:06.238 Test: lvol_esnap_hotplug ... 
00:06:06.238 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:06.238 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:06.238 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:06.238 [2024-07-15 13:58:52.163830] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 1817c42d-6abb-4854-99fd-f52af4c516c8: failed to create esnap bs_dev: error -12 00:06:06.238 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:06.238 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:06.238 [2024-07-15 13:58:52.164117] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol b20bed0d-701f-4155-a66d-30555d28b6c1: failed to create esnap bs_dev: error -12 00:06:06.238 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:06.238 [2024-07-15 13:58:52.164298] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 5c2209dc-3c89-4384-94c0-78ec6bf44edd: failed to create esnap bs_dev: error -12 00:06:06.238 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:06.238 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:06.239 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:06.239 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:06.239 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:06.239 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:06.239 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:06.239 passed 00:06:06.239 Test: lvol_get_by ...passed 00:06:06.239 Test: lvol_shallow_copy ...[2024-07-15 13:58:52.165686] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:06.239 [2024-07-15 13:58:52.165774] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol bc9eac3f-21fb-4fcc-b2ef-9d5b8319078c shallow copy, ext_dev must not be NULL 00:06:06.239 passed 00:06:06.239 Test: lvol_set_parent ...[2024-07-15 13:58:52.166111] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:06:06.239 [2024-07-15 13:58:52.166188] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:06:06.239 passed 00:06:06.239 Test: lvol_set_external_parent ...[2024-07-15 13:58:52.166450] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:06:06.239 [2024-07-15 13:58:52.166527] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:06:06.239 [2024-07-15 13:58:52.166637] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:06:06.239 passed 00:06:06.239 00:06:06.239 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.239 suites 1 1 n/a 0 0 00:06:06.239 tests 37 37 37 0 0 00:06:06.239 asserts 1505 1505 1505 0 n/a 00:06:06.239 00:06:06.239 Elapsed time = 0.015 seconds 00:06:06.239 00:06:06.239 real 0m0.040s 00:06:06.239 user 0m0.026s 00:06:06.239 sys 0m0.014s 
00:06:06.239 13:58:52 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.239 13:58:52 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:06.239 ************************************ 00:06:06.239 END TEST unittest_lvol 00:06:06.239 ************************************ 00:06:06.239 13:58:52 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:06.239 13:58:52 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:06.239 13:58:52 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:06.239 13:58:52 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.239 13:58:52 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.239 13:58:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:06.239 ************************************ 00:06:06.239 START TEST unittest_nvme_rdma 00:06:06.239 ************************************ 00:06:06.239 13:58:52 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:06.498 00:06:06.498 00:06:06.498 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.498 http://cunit.sourceforge.net/ 00:06:06.498 00:06:06.498 00:06:06.498 Suite: nvme_rdma 00:06:06.498 Test: test_nvme_rdma_build_sgl_request ...[2024-07-15 13:58:52.251112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:06.498 [2024-07-15 13:58:52.251481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1552:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:06.498 [2024-07-15 13:58:52.251677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1608:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:06.498 passed 00:06:06.498 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:06.498 Test: test_nvme_rdma_build_contig_request ...[2024-07-15 13:58:52.251872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1489:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:06.498 passed 00:06:06.498 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:06.498 Test: test_nvme_rdma_create_reqs ...[2024-07-15 13:58:52.252149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:06.498 passed 00:06:06.498 Test: test_nvme_rdma_create_rsps ...[2024-07-15 13:58:52.252670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:06.498 passed 00:06:06.498 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-15 13:58:52.252860] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:06.498 passed 00:06:06.498 Test: test_nvme_rdma_poller_create ...[2024-07-15 13:58:52.252950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1746:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:06:06.498 passed 00:06:06.498 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:06:06.498 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-15 13:58:52.253143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:06.498 passed 00:06:06.498 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:06.499 Test: test_nvme_rdma_req_init ...passed 00:06:06.499 Test: test_nvme_rdma_validate_cm_event ...[2024-07-15 13:58:52.253412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:06.499 [2024-07-15 13:58:52.253474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:06.499 passed 00:06:06.499 Test: test_nvme_rdma_qpair_init ...passed 00:06:06.499 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:06.499 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:06.499 Test: test_rdma_get_memory_translation ...[2024-07-15 13:58:52.253633] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:06.499 [2024-07-15 13:58:52.253710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:06.499 passed 00:06:06.499 Test: test_get_rdma_qpair_from_wc ...passed 00:06:06.499 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:06.499 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-15 13:58:52.253898] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:06.499 [2024-07-15 13:58:52.253964] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:06.499 passed 00:06:06.499 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-15 13:58:52.254215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:06:06.499 [2024-07-15 13:58:52.254279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:06.499 [2024-07-15 13:58:52.254348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffefbde2800 on poll group 0x60c000000040 00:06:06.499 [2024-07-15 13:58:52.254413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
00:06:06.499 [2024-07-15 13:58:52.254514] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:06.499 [2024-07-15 13:58:52.254599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffefbde2800 on poll group 0x60c000000040 00:06:06.499 passed 00:06:06.499 00:06:06.499 [2024-07-15 13:58:52.254698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: Success 00:06:06.499 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.499 suites 1 1 n/a 0 0 00:06:06.499 tests 21 21 21 0 0 00:06:06.499 asserts 397 397 397 0 n/a 00:06:06.499 00:06:06.499 Elapsed time = 0.004 seconds 00:06:06.499 00:06:06.499 real 0m0.029s 00:06:06.499 user 0m0.018s 00:06:06.499 sys 0m0.011s 00:06:06.499 13:58:52 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.499 13:58:52 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:06.499 ************************************ 00:06:06.499 END TEST unittest_nvme_rdma 00:06:06.499 ************************************ 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:06.499 13:58:52 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:06.499 ************************************ 00:06:06.499 START TEST unittest_nvmf_transport 00:06:06.499 ************************************ 00:06:06.499 13:58:52 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:06.499 00:06:06.499 00:06:06.499 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.499 http://cunit.sourceforge.net/ 00:06:06.499 00:06:06.499 00:06:06.499 Suite: nvmf 00:06:06.499 Test: test_spdk_nvmf_transport_create ...[2024-07-15 13:58:52.335072] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:06.499 [2024-07-15 13:58:52.335400] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:06.499 [2024-07-15 13:58:52.335486] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:06.499 [2024-07-15 13:58:52.335657] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:06.499 passed 00:06:06.499 Test: test_nvmf_transport_poll_group_create ...passed 00:06:06.499 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-15 13:58:52.335895] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:06:06.499 [2024-07-15 13:58:52.336025] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:06.499 [2024-07-15 13:58:52.336109] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:06.499 passed 00:06:06.499 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:06.499 00:06:06.499 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.499 suites 1 1 n/a 0 0 00:06:06.499 tests 4 4 4 0 0 00:06:06.499 asserts 49 49 49 0 n/a 00:06:06.499 00:06:06.499 Elapsed time = 0.001 seconds 00:06:06.499 00:06:06.499 real 0m0.029s 00:06:06.499 user 0m0.015s 00:06:06.499 sys 0m0.014s 00:06:06.499 13:58:52 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.499 13:58:52 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:06:06.499 ************************************ 00:06:06.499 END TEST unittest_nvmf_transport 00:06:06.499 ************************************ 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:06.499 13:58:52 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:06.499 ************************************ 00:06:06.499 START TEST unittest_rdma 00:06:06.499 ************************************ 00:06:06.499 13:58:52 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:06.499 00:06:06.499 00:06:06.499 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.499 http://cunit.sourceforge.net/ 00:06:06.499 00:06:06.499 00:06:06.499 Suite: rdma_common 00:06:06.499 Test: test_spdk_rdma_pd ...[2024-07-15 13:58:52.417484] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:06:06.499 [2024-07-15 13:58:52.417803] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:06:06.499 passed 00:06:06.499 00:06:06.499 Run Summary: Type Total Ran Passed Failed Inactive 00:06:06.499 suites 1 1 n/a 0 0 00:06:06.499 tests 1 1 1 0 0 00:06:06.499 asserts 31 31 31 0 n/a 00:06:06.499 00:06:06.499 Elapsed time = 0.001 seconds 00:06:06.499 00:06:06.499 real 0m0.025s 00:06:06.499 user 0m0.015s 00:06:06.499 sys 0m0.009s 00:06:06.499 13:58:52 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.499 13:58:52 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:06.499 ************************************ 00:06:06.499 END TEST unittest_rdma 00:06:06.499 ************************************ 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:06.499 13:58:52 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:06.499 13:58:52 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.499 13:58:52 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:06.499 ************************************ 00:06:06.499 START TEST unittest_nvme_cuse 00:06:06.499 ************************************ 00:06:06.499 13:58:52 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:06.758 00:06:06.758 00:06:06.758 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.758 http://cunit.sourceforge.net/ 00:06:06.758 00:06:06.758 00:06:06.758 Suite: nvme_cuse 00:06:06.758 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:06.758 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:06.758 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:06.758 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:06.758 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:06.758 Test: test_cuse_nvme_submit_io ...[2024-07-15 13:58:52.501556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:06.758 passed 00:06:06.758 Test: test_cuse_nvme_reset ...[2024-07-15 13:58:52.501950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:06.758 passed 00:06:07.017 Test: test_nvme_cuse_stop ...passed 00:06:07.017 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:07.017 00:06:07.017 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.017 suites 1 1 n/a 0 0 00:06:07.017 tests 9 9 9 0 0 00:06:07.017 asserts 118 118 118 0 n/a 00:06:07.017 00:06:07.017 Elapsed time = 0.501 seconds 00:06:07.017 00:06:07.017 real 0m0.532s 00:06:07.017 user 0m0.259s 00:06:07.017 sys 0m0.269s 00:06:07.017 13:58:53 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.017 13:58:53 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:06:07.017 ************************************ 00:06:07.017 END TEST unittest_nvme_cuse 00:06:07.017 ************************************ 00:06:07.277 13:58:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:07.277 13:58:53 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:06:07.277 13:58:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.277 13:58:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.277 13:58:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:07.277 ************************************ 00:06:07.277 START TEST unittest_nvmf 00:06:07.277 ************************************ 00:06:07.277 13:58:53 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:06:07.277 13:58:53 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:07.277 00:06:07.277 00:06:07.277 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.277 http://cunit.sourceforge.net/ 00:06:07.277 00:06:07.277 00:06:07.277 Suite: nvmf 00:06:07.277 Test: test_get_log_page ...[2024-07-15 13:58:53.084180] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:07.277 passed 00:06:07.277 Test: test_process_fabrics_cmd ...[2024-07-15 13:58:53.084541] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on 
qid 0 before CONNECT 00:06:07.277 passed 00:06:07.277 Test: test_connect ...[2024-07-15 13:58:53.085048] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:07.277 [2024-07-15 13:58:53.085171] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:07.277 [2024-07-15 13:58:53.085216] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:07.277 [2024-07-15 13:58:53.085266] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:06:07.277 [2024-07-15 13:58:53.085349] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:07.278 [2024-07-15 13:58:53.085411] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:07.278 [2024-07-15 13:58:53.085455] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 899:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:07.278 [2024-07-15 13:58:53.085507] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:07.278 [2024-07-15 13:58:53.085603] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:07.278 [2024-07-15 13:58:53.085686] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:07.278 [2024-07-15 13:58:53.085935] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:07.278 [2024-07-15 13:58:53.086015] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 688:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:07.278 [2024-07-15 13:58:53.086083] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 695:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:07.278 [2024-07-15 13:58:53.086158] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 719:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:07.278 [2024-07-15 13:58:53.086250] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 294:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:06:07.278 [2024-07-15 13:58:53.086397] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:06:07.278 [2024-07-15 13:58:53.086465] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:06:07.278 passed 00:06:07.278 Test: test_get_ns_id_desc_list ...passed 00:06:07.278 Test: test_identify_ns ...[2024-07-15 13:58:53.086676] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.278 [2024-07-15 13:58:53.086900] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:07.278 [2024-07-15 13:58:53.086998] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 
00:06:07.278 passed 00:06:07.278 Test: test_identify_ns_iocs_specific ...[2024-07-15 13:58:53.087121] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.278 [2024-07-15 13:58:53.087316] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:07.278 passed 00:06:07.278 Test: test_reservation_write_exclusive ...passed 00:06:07.278 Test: test_reservation_exclusive_access ...passed 00:06:07.278 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:07.278 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:07.278 Test: test_reservation_notification_log_page ...passed 00:06:07.278 Test: test_get_dif_ctx ...passed 00:06:07.278 Test: test_set_get_features ...[2024-07-15 13:58:53.087812] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:07.278 [2024-07-15 13:58:53.087891] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:07.278 [2024-07-15 13:58:53.087930] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:07.278 passed 00:06:07.278 Test: test_identify_ctrlr ...passed 00:06:07.278 Test: test_identify_ctrlr_iocs_specific ...[2024-07-15 13:58:53.087976] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:07.278 passed 00:06:07.278 Test: test_custom_admin_cmd ...passed 00:06:07.278 Test: test_fused_compare_and_write ...[2024-07-15 13:58:53.088313] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:07.278 [2024-07-15 13:58:53.088362] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:07.278 passed 00:06:07.278 Test: test_multi_async_event_reqs ...passed 00:06:07.278 Test: test_get_ana_log_page_one_ns_per_anagrp ...[2024-07-15 13:58:53.088404] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:07.278 passed 00:06:07.278 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:07.278 Test: test_multi_async_events ...passed 00:06:07.278 Test: test_rae ...passed 00:06:07.278 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:07.278 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:07.278 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:06:07.278 Test: test_zcopy_read ...passed 00:06:07.278 Test: test_zcopy_write ...passed 00:06:07.278 Test: test_nvmf_property_set ...passed 00:06:07.278 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:06:07.278 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-15 13:58:53.088947] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:06:07.278 [2024-07-15 13:58:53.089009] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:06:07.278 [2024-07-15 13:58:53.089208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:07.278 [2024-07-15 13:58:53.089256] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:07.278 [2024-07-15 13:58:53.089332] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:07.278 [2024-07-15 13:58:53.089385] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:07.278 [2024-07-15 13:58:53.089456] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:07.278 passed 00:06:07.278 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:06:07.278 Test: test_nvmf_check_qpair_active ...[2024-07-15 13:58:53.089577] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4730:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:06:07.278 [2024-07-15 13:58:53.089625] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4744:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:06:07.278 passed 00:06:07.278 00:06:07.278 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.278 suites 1 1 n/a 0 0 00:06:07.278 tests 32 32 32 0 0 00:06:07.278 asserts 977 977 977 0 n/a 00:06:07.278 00:06:07.278 Elapsed time = 0.005 seconds 00:06:07.278 [2024-07-15 13:58:53.089668] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:06:07.278 [2024-07-15 13:58:53.089707] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:06:07.278 [2024-07-15 13:58:53.089753] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4756:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:06:07.278 13:58:53 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:07.278 00:06:07.278 00:06:07.278 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.278 http://cunit.sourceforge.net/ 00:06:07.278 00:06:07.278 00:06:07.278 Suite: nvmf 00:06:07.278 Test: test_get_rw_params ...passed 00:06:07.278 Test: test_get_rw_ext_params ...passed 00:06:07.278 Test: test_lba_in_range ...passed 00:06:07.278 Test: test_get_dif_ctx ...passed 00:06:07.278 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:07.278 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-15 13:58:53.113684] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:07.278 [2024-07-15 13:58:53.113995] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:07.278 [2024-07-15 13:58:53.114134] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:07.278 passed 00:06:07.278 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-15 13:58:53.114198] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:07.278 [2024-07-15 13:58:53.114277] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:07.278 passed 
00:06:07.278 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-15 13:58:53.114372] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:07.278 [2024-07-15 13:58:53.114412] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:07.278 passed 00:06:07.278 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:06:07.278 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:06:07.278 00:06:07.278 [2024-07-15 13:58:53.114473] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:07.278 [2024-07-15 13:58:53.114510] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:07.278 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.278 suites 1 1 n/a 0 0 00:06:07.278 tests 10 10 10 0 0 00:06:07.278 asserts 159 159 159 0 n/a 00:06:07.278 00:06:07.278 Elapsed time = 0.001 seconds 00:06:07.278 13:58:53 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:07.278 00:06:07.278 00:06:07.278 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.278 http://cunit.sourceforge.net/ 00:06:07.278 00:06:07.278 00:06:07.278 Suite: nvmf 00:06:07.278 Test: test_discovery_log ...passed 00:06:07.278 Test: test_discovery_log_with_filters ...passed 00:06:07.278 00:06:07.278 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.278 suites 1 1 n/a 0 0 00:06:07.278 tests 2 2 2 0 0 00:06:07.278 asserts 238 238 238 0 n/a 00:06:07.278 00:06:07.278 Elapsed time = 0.003 seconds 00:06:07.278 13:58:53 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:07.278 00:06:07.278 00:06:07.278 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.278 http://cunit.sourceforge.net/ 00:06:07.278 00:06:07.278 00:06:07.278 Suite: nvmf 00:06:07.278 Test: nvmf_test_create_subsystem ...[2024-07-15 13:58:53.165474] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:06:07.278 [2024-07-15 13:58:53.165693] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:06:07.278 [2024-07-15 13:58:53.165857] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:07.278 [2024-07-15 13:58:53.165947] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:06:07.278 [2024-07-15 13:58:53.165990] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 
00:06:07.278 [2024-07-15 13:58:53.166034] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:06:07.278 [2024-07-15 13:58:53.166115] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:07.278 [2024-07-15 13:58:53.166173] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:06:07.278 [2024-07-15 13:58:53.166211] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:07.278 [2024-07-15 13:58:53.166249] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:06:07.278 [2024-07-15 13:58:53.166284] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:07.278 [2024-07-15 13:58:53.166321] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:06:07.278 [2024-07-15 13:58:53.166413] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:07.278 [2024-07-15 13:58:53.166516] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:06:07.279 [2024-07-15 13:58:53.166605] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:06:07.279 [2024-07-15 13:58:53.166652] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:06:07.279 [2024-07-15 13:58:53.166748] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:07.279 [2024-07-15 13:58:53.166790] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:06:07.279 [2024-07-15 13:58:53.166830] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:07.279 [2024-07-15 13:58:53.166882] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:06:07.279 [2024-07-15 13:58:53.166922] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:07.279 passed 00:06:07.279 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-15 13:58:53.166958] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:06:07.279 passed 00:06:07.279 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-15 13:58:53.167088] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:07.279 [2024-07-15 13:58:53.167139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:07.279 passed 00:06:07.279 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:07.279 Test: test_spdk_nvmf_ns_visible ...[2024-07-15 13:58:53.167343] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
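For readers following the NQN rejections above, here is a minimal, self-contained sketch (plain C, not SPDK code) of the two boundary rules those messages describe: total length must fall in 11..223 bytes and a ':' must separate the reverse-domain part from a user-specified name. It deliberately omits the label and UUID-form checks also shown in this log; the helper name nqn_shape_ok and the sample strings are made up for illustration, the real checks live in lib/nvmf/subsystem.c (nvmf_nqn_is_valid).

    /*
     * Illustrative sketch only -- not the SPDK implementation.
     * Mirrors the length and ':'-separator rules exercised by subsystem_ut above;
     * label-name and UUID-form validation are intentionally left out.
     */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool nqn_shape_ok(const char *nqn)
    {
        size_t len = strlen(nqn);
        const char *colon;

        if (len < 11 || len > 223) {       /* "length 0 < min 11", "length 224 > max 223" */
            return false;
        }
        if (strncmp(nqn, "nqn.", 4) != 0) {
            return false;
        }
        colon = strchr(nqn, ':');
        if (colon == NULL || colon[1] == '\0') {   /* user name after ':' is required */
            return false;
        }
        return true;
    }

    int main(void)
    {
        const char *cases[] = { "", "nqn.2016-06.io.spdk:", "nqn.2016-06.io.spdk:cnode1" };
        for (size_t i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
            printf("%-30s -> %s\n", cases[i], nqn_shape_ok(cases[i]) ? "ok" : "rejected");
        }
        return 0;
    }

Run against the same inputs the test feeds in, the empty string and "nqn.2016-06.io.spdk:" are rejected for the same reasons the log reports, while "nqn.2016-06.io.spdk:cnode1" passes the shape check.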
00:06:07.279 [2024-07-15 13:58:53.167505] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:06:07.279 passed 00:06:07.279 Test: test_reservation_register ...[2024-07-15 13:58:53.167888] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_reservation_register_with_ptpl ...[2024-07-15 13:58:53.168007] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:07.279 passed 00:06:07.279 Test: test_reservation_acquire_preempt_1 ...[2024-07-15 13:58:53.169249] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_reservation_acquire_release_with_ptpl ...passed 00:06:07.279 Test: test_reservation_release ...[2024-07-15 13:58:53.170982] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_reservation_unregister_notification ...[2024-07-15 13:58:53.171216] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_reservation_release_notification ...[2024-07-15 13:58:53.171427] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_reservation_release_notification_write_exclusive ...[2024-07-15 13:58:53.171649] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_reservation_clear_notification ...[2024-07-15 13:58:53.171875] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_reservation_preempt_notification ...[2024-07-15 13:58:53.172114] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3102:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:07.279 passed 00:06:07.279 Test: test_spdk_nvmf_ns_event ...passed 00:06:07.279 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:07.279 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:07.279 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-15 13:58:53.172701] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:07.279 passed 00:06:07.279 Test: test_nvmf_ns_reservation_report ...[2024-07-15 13:58:53.172795] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:06:07.279 [2024-07-15 13:58:53.172923] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3465:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:07.279 passed 00:06:07.279 Test: test_nvmf_nqn_is_valid ...[2024-07-15 13:58:53.173006] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": 
length 4 < min 11 00:06:07.279 [2024-07-15 13:58:53.173078] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:694e9ea4-4126-4c2b-b636-eb8cc2ca4e8": uuid is not the correct length 00:06:07.279 passed 00:06:07.279 Test: test_nvmf_ns_reservation_restore ...passed 00:06:07.279 Test: test_nvmf_subsystem_state_change ...[2024-07-15 13:58:53.173123] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:07.279 [2024-07-15 13:58:53.173217] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:07.279 passed 00:06:07.279 Test: test_nvmf_reservation_custom_ops ...passed 00:06:07.279 00:06:07.279 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.279 suites 1 1 n/a 0 0 00:06:07.279 tests 24 24 24 0 0 00:06:07.279 asserts 499 499 499 0 n/a 00:06:07.279 00:06:07.279 Elapsed time = 0.008 seconds 00:06:07.279 13:58:53 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:07.279 00:06:07.279 00:06:07.279 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.279 http://cunit.sourceforge.net/ 00:06:07.279 00:06:07.279 00:06:07.279 Suite: nvmf 00:06:07.279 Test: test_nvmf_tcp_create ...[2024-07-15 13:58:53.214874] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:07.279 passed 00:06:07.279 Test: test_nvmf_tcp_destroy ...passed 00:06:07.279 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:07.538 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:07.538 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:07.538 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:07.538 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:07.538 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-15 13:58:53.300914] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 passed 00:06:07.538 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:06:07.538 Test: test_nvmf_tcp_icreq_handle ...[2024-07-15 13:58:53.301007] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301100] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301137] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.301170] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301238] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:07.538 [2024-07-15 13:58:53.301321] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.301375] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301407] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:07.538 [2024-07-15 13:58:53.301442] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301476] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.301510] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301545] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:06:07.538 passed 00:06:07.538 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:07.538 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-15 13:58:53.301601] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301654] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2517:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:07.538 [2024-07-15 13:58:53.301693] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 passed 00:06:07.538 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-15 13:58:53.301733] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c982c20 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301787] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffd5c983980 00:06:07.538 [2024-07-15 13:58:53.301863] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.301916] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.301960] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2306:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffd5c9830e0 00:06:07.538 [2024-07-15 13:58:53.301997] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302030] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302063] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:07.538 [2024-07-15 13:58:53.302098] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302150] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302192] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:07.538 [2024-07-15 13:58:53.302226] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302259] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302289] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302327] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302379] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302409] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302458] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302496] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302528] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 passed 00:06:07.538 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-15 13:58:53.302554] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302600] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302627] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 [2024-07-15 13:58:53.302666] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:06:07.538 [2024-07-15 13:58:53.302696] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffd5c9830e0 is same with the state(5) to be set 00:06:07.538 passed 00:06:07.538 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-15 13:58:53.322427] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:07.538 [2024-07-15 13:58:53.322521] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:06:07.538 passed 00:06:07.538 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-15 13:58:53.322873] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:07.538 passed 00:06:07.538 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-15 13:58:53.322938] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:07.538 passed 00:06:07.538 00:06:07.538 [2024-07-15 13:58:53.323137] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:07.538 [2024-07-15 13:58:53.323188] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:06:07.538 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.538 suites 1 1 n/a 0 0 00:06:07.538 tests 17 17 17 0 0 00:06:07.538 asserts 222 222 222 0 n/a 00:06:07.538 00:06:07.538 Elapsed time = 0.129 seconds 00:06:07.538 13:58:53 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:07.538 00:06:07.538 00:06:07.538 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.538 http://cunit.sourceforge.net/ 00:06:07.538 00:06:07.538 00:06:07.538 Suite: nvmf 00:06:07.538 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:07.538 00:06:07.538 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.538 suites 1 1 n/a 0 0 00:06:07.538 tests 1 1 1 0 0 00:06:07.538 asserts 17 17 17 0 n/a 00:06:07.538 00:06:07.538 Elapsed time = 0.024 seconds 00:06:07.538 00:06:07.538 real 0m0.397s 00:06:07.538 user 0m0.186s 00:06:07.538 sys 0m0.209s 00:06:07.538 13:58:53 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.538 13:58:53 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:06:07.538 ************************************ 00:06:07.538 END TEST unittest_nvmf 00:06:07.538 ************************************ 00:06:07.538 13:58:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:07.538 13:58:53 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:07.538 13:58:53 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:07.538 13:58:53 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:07.538 13:58:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.538 13:58:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.538 13:58:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:07.538 ************************************ 00:06:07.538 START TEST unittest_nvmf_rdma 00:06:07.538 ************************************ 00:06:07.538 13:58:53 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:07.797 00:06:07.797 00:06:07.797 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.797 http://cunit.sourceforge.net/ 00:06:07.797 00:06:07.797 00:06:07.797 Suite: nvmf 00:06:07.797 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-15 13:58:53.544919] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1863:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:07.797 [2024-07-15 13:58:53.545224] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:07.797 [2024-07-15 13:58:53.545275] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1913:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:07.797 passed 00:06:07.797 Test: test_spdk_nvmf_rdma_request_process ...passed 00:06:07.797 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:07.797 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:07.797 Test: test_nvmf_rdma_opts_init ...passed 00:06:07.797 Test: test_nvmf_rdma_request_free_data ...passed 00:06:07.797 Test: test_nvmf_rdma_resources_create ...passed 00:06:07.797 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:07.797 Test: test_nvmf_rdma_resize_cq ...[2024-07-15 13:58:53.547250] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:06:07.797 Using CQ of insufficient size may lead to CQ overrun 00:06:07.797 [2024-07-15 13:58:53.547375] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 959:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:07.797 [2024-07-15 13:58:53.547426] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: Success 00:06:07.797 passed 00:06:07.797 00:06:07.797 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.797 suites 1 1 n/a 0 0 00:06:07.797 tests 9 9 9 0 0 00:06:07.797 asserts 579 579 579 0 n/a 00:06:07.797 00:06:07.797 Elapsed time = 0.003 seconds 00:06:07.797 00:06:07.797 real 0m0.032s 00:06:07.797 user 0m0.014s 00:06:07.797 sys 0m0.017s 00:06:07.797 13:58:53 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.797 ************************************ 00:06:07.797 END TEST unittest_nvmf_rdma 00:06:07.797 ************************************ 00:06:07.797 13:58:53 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:06:07.797 13:58:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:07.797 13:58:53 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:07.797 13:58:53 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:06:07.797 13:58:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.797 13:58:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.797 13:58:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:07.797 ************************************ 00:06:07.797 START TEST unittest_scsi 00:06:07.797 ************************************ 00:06:07.797 13:58:53 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:06:07.797 13:58:53 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:07.797 00:06:07.797 00:06:07.797 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.797 http://cunit.sourceforge.net/ 00:06:07.797 00:06:07.797 00:06:07.797 Suite: dev_suite 00:06:07.797 Test: dev_destruct_null_dev ...passed 00:06:07.797 Test: dev_destruct_zero_luns ...passed 00:06:07.797 Test: 
dev_destruct_null_lun ...passed 00:06:07.797 Test: dev_destruct_success ...passed 00:06:07.797 Test: dev_construct_num_luns_zero ...[2024-07-15 13:58:53.627448] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:07.797 passed 00:06:07.797 Test: dev_construct_no_lun_zero ...passed 00:06:07.797 Test: dev_construct_null_lun ...[2024-07-15 13:58:53.627712] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:07.797 [2024-07-15 13:58:53.627789] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:07.797 passed 00:06:07.797 Test: dev_construct_name_too_long ...passed 00:06:07.797 Test: dev_construct_success ...passed 00:06:07.797 Test: dev_construct_success_lun_zero_not_first ...[2024-07-15 13:58:53.627833] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:07.797 passed 00:06:07.797 Test: dev_queue_mgmt_task_success ...passed 00:06:07.797 Test: dev_queue_task_success ...passed 00:06:07.797 Test: dev_stop_success ...passed 00:06:07.797 Test: dev_add_port_max_ports ...[2024-07-15 13:58:53.628133] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:07.797 passed 00:06:07.797 Test: dev_add_port_construct_failure1 ...passed 00:06:07.798 Test: dev_add_port_construct_failure2 ...passed[2024-07-15 13:58:53.628244] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:07.798 [2024-07-15 13:58:53.628333] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:07.798 00:06:07.798 Test: dev_add_port_success1 ...passed 00:06:07.798 Test: dev_add_port_success2 ...passed 00:06:07.798 Test: dev_add_port_success3 ...passed 00:06:07.798 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:07.798 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:07.798 Test: dev_find_port_by_id_success ...passed 00:06:07.798 Test: dev_add_lun_bdev_not_found ...passed 00:06:07.798 Test: dev_add_lun_no_free_lun_id ...passed 00:06:07.798 Test: dev_add_lun_success1 ...[2024-07-15 13:58:53.628684] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:07.798 passed 00:06:07.798 Test: dev_add_lun_success2 ...passed 00:06:07.798 Test: dev_check_pending_tasks ...passed 00:06:07.798 Test: dev_iterate_luns ...passed 00:06:07.798 Test: dev_find_free_lun ...passed 00:06:07.798 00:06:07.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.798 suites 1 1 n/a 0 0 00:06:07.798 tests 29 29 29 0 0 00:06:07.798 asserts 97 97 97 0 n/a 00:06:07.798 00:06:07.798 Elapsed time = 0.002 seconds 00:06:07.798 13:58:53 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:07.798 00:06:07.798 00:06:07.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.798 http://cunit.sourceforge.net/ 00:06:07.798 00:06:07.798 00:06:07.798 Suite: lun_suite 00:06:07.798 Test: 
lun_task_mgmt_execute_abort_task_not_supported ...[2024-07-15 13:58:53.656861] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:07.798 passed 00:06:07.798 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-15 13:58:53.657136] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:07.798 passed 00:06:07.798 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:07.798 Test: lun_task_mgmt_execute_target_reset ...passed 00:06:07.798 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:07.798 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:07.798 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:07.798 Test: lun_append_task_null_lun_not_supported ...passed 00:06:07.798 Test: lun_execute_scsi_task_pending ...[2024-07-15 13:58:53.657304] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:07.798 passed 00:06:07.798 Test: lun_execute_scsi_task_complete ...passed 00:06:07.798 Test: lun_execute_scsi_task_resize ...passed 00:06:07.798 Test: lun_destruct_success ...passed 00:06:07.798 Test: lun_construct_null_ctx ...passed 00:06:07.798 Test: lun_construct_success ...passed 00:06:07.798 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-15 13:58:53.657478] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:07.798 passed 00:06:07.798 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:07.798 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:06:07.798 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:06:07.798 00:06:07.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.798 suites 1 1 n/a 0 0 00:06:07.798 tests 18 18 18 0 0 00:06:07.798 asserts 153 153 153 0 n/a 00:06:07.798 00:06:07.798 Elapsed time = 0.001 seconds 00:06:07.798 13:58:53 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:07.798 00:06:07.798 00:06:07.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.798 http://cunit.sourceforge.net/ 00:06:07.798 00:06:07.798 00:06:07.798 Suite: scsi_suite 00:06:07.798 Test: scsi_init ...passed 00:06:07.798 00:06:07.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.798 suites 1 1 n/a 0 0 00:06:07.798 tests 1 1 1 0 0 00:06:07.798 asserts 1 1 1 0 n/a 00:06:07.798 00:06:07.798 Elapsed time = 0.000 seconds 00:06:07.798 13:58:53 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:07.798 00:06:07.798 00:06:07.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.798 http://cunit.sourceforge.net/ 00:06:07.798 00:06:07.798 00:06:07.798 Suite: translation_suite 00:06:07.798 Test: mode_select_6_test ...passed 00:06:07.798 Test: mode_select_6_test2 ...passed 00:06:07.798 Test: mode_sense_6_test ...passed 00:06:07.798 Test: mode_sense_10_test ...passed 00:06:07.798 Test: inquiry_evpd_test ...passed 00:06:07.798 Test: inquiry_standard_test ...passed 00:06:07.798 Test: inquiry_overflow_test ...passed 00:06:07.798 Test: task_complete_test ...passed 00:06:07.798 Test: lba_range_test ...passed 00:06:07.798 Test: xfer_len_test ...[2024-07-15 13:58:53.707877] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > 
maximum transfer length 8192 00:06:07.798 passed 00:06:07.798 Test: xfer_test ...passed 00:06:07.798 Test: scsi_name_padding_test ...passed 00:06:07.798 Test: get_dif_ctx_test ...passed 00:06:07.798 Test: unmap_split_test ...passed 00:06:07.798 00:06:07.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.798 suites 1 1 n/a 0 0 00:06:07.798 tests 14 14 14 0 0 00:06:07.798 asserts 1205 1205 1205 0 n/a 00:06:07.798 00:06:07.798 Elapsed time = 0.003 seconds 00:06:07.798 13:58:53 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:07.798 00:06:07.798 00:06:07.798 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.798 http://cunit.sourceforge.net/ 00:06:07.798 00:06:07.798 00:06:07.798 Suite: reservation_suite 00:06:07.798 Test: test_reservation_register ...[2024-07-15 13:58:53.730701] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 passed 00:06:07.798 Test: test_reservation_reserve ...[2024-07-15 13:58:53.731104] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 [2024-07-15 13:58:53.731195] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:07.798 [2024-07-15 13:58:53.731323] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:07.798 passed 00:06:07.798 Test: test_all_registrant_reservation_reserve ...passed 00:06:07.798 Test: test_all_registrant_reservation_access ...[2024-07-15 13:58:53.731404] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 [2024-07-15 13:58:53.731533] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 [2024-07-15 13:58:53.731607] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:06:07.798 passed 00:06:07.798 Test: test_reservation_preempt_non_all_regs ...[2024-07-15 13:58:53.731680] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:06:07.798 [2024-07-15 13:58:53.731805] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 [2024-07-15 13:58:53.731883] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:07.798 passed 00:06:07.798 Test: test_reservation_preempt_all_regs ...[2024-07-15 13:58:53.732039] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 passed 00:06:07.798 Test: test_reservation_cmds_conflict ...[2024-07-15 13:58:53.732212] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 [2024-07-15 13:58:53.732310] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:07.798 [2024-07-15 13:58:53.732380] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:07.798 [2024-07-15 13:58:53.732418] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:07.798 [2024-07-15 13:58:53.732460] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:07.798 [2024-07-15 13:58:53.732501] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:07.798 passed 00:06:07.798 Test: test_scsi2_reserve_release ...passed 00:06:07.798 Test: test_pr_with_scsi2_reserve_release ...passed 00:06:07.798 00:06:07.798 [2024-07-15 13:58:53.732595] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:07.798 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.798 suites 1 1 n/a 0 0 00:06:07.798 tests 9 9 9 0 0 00:06:07.798 asserts 344 344 344 0 n/a 00:06:07.798 00:06:07.798 Elapsed time = 0.002 seconds 00:06:07.798 00:06:07.798 real 0m0.129s 00:06:07.798 user 0m0.073s 00:06:07.798 sys 0m0.056s 00:06:07.798 13:58:53 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.798 13:58:53 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:06:07.798 ************************************ 00:06:07.798 END TEST unittest_scsi 00:06:07.798 ************************************ 00:06:07.798 13:58:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:07.798 13:58:53 unittest -- unit/unittest.sh@278 -- # uname -s 00:06:07.798 13:58:53 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:06:07.798 13:58:53 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:06:07.798 13:58:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.798 13:58:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.798 13:58:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:07.798 ************************************ 00:06:07.798 START TEST unittest_sock 00:06:07.798 ************************************ 00:06:08.057 13:58:53 unittest.unittest_sock -- common/autotest_common.sh@1123 -- # unittest_sock 00:06:08.058 13:58:53 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:08.058 00:06:08.058 00:06:08.058 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.058 http://cunit.sourceforge.net/ 00:06:08.058 00:06:08.058 00:06:08.058 Suite: sock 00:06:08.058 Test: posix_sock ...passed 00:06:08.058 Test: ut_sock ...passed 00:06:08.058 Test: posix_sock_group ...passed 00:06:08.058 Test: ut_sock_group ...passed 00:06:08.058 Test: posix_sock_group_fairness ...passed 00:06:08.058 Test: _posix_sock_close ...passed 00:06:08.058 Test: sock_get_default_opts ...passed 00:06:08.058 Test: ut_sock_impl_get_set_opts ...passed 00:06:08.058 Test: posix_sock_impl_get_set_opts ...passed 00:06:08.058 Test: ut_sock_map ...passed 00:06:08.058 Test: override_impl_opts ...passed 00:06:08.058 Test: ut_sock_group_get_ctx ...passed 00:06:08.058 00:06:08.058 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.058 suites 1 1 n/a 0 0 00:06:08.058 tests 12 12 12 0 0 00:06:08.058 asserts 349 349 349 0 n/a 00:06:08.058 
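Each *_ut binary in this log prints the same CUnit banner, per-test lines, and run summary. A minimal harness of that shape, using the standard CUnit Basic API, looks roughly like the sketch below; the suite name "example" and test_something are placeholders, not SPDK tests.

    /*
     * Minimal sketch of the CUnit pattern the *_ut binaries above follow:
     * register a suite, add tests, run the verbose basic runner, and return
     * nonzero if any assertion failed. Not taken from the SPDK sources.
     */
    #include <CUnit/Basic.h>

    static void test_something(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_something", test_something) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);   /* produces the per-test and "Run Summary" lines seen above */
        CU_basic_run_tests();
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures ? 1 : 0;
    }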
00:06:08.058 Elapsed time = 0.007 seconds 00:06:08.058 13:58:53 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:08.058 00:06:08.058 00:06:08.058 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.058 http://cunit.sourceforge.net/ 00:06:08.058 00:06:08.058 00:06:08.058 Suite: posix 00:06:08.058 Test: flush ...passed 00:06:08.058 00:06:08.058 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.058 suites 1 1 n/a 0 0 00:06:08.058 tests 1 1 1 0 0 00:06:08.058 asserts 28 28 28 0 n/a 00:06:08.058 00:06:08.058 Elapsed time = 0.000 seconds 00:06:08.058 13:58:53 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.058 00:06:08.058 real 0m0.091s 00:06:08.058 user 0m0.042s 00:06:08.058 sys 0m0.024s 00:06:08.058 13:58:53 unittest.unittest_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.058 13:58:53 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:06:08.058 ************************************ 00:06:08.058 END TEST unittest_sock 00:06:08.058 ************************************ 00:06:08.058 13:58:53 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:08.058 13:58:53 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:08.058 13:58:53 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.058 13:58:53 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.058 13:58:53 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:08.058 ************************************ 00:06:08.058 START TEST unittest_thread 00:06:08.058 ************************************ 00:06:08.058 13:58:53 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:08.058 00:06:08.058 00:06:08.058 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.058 http://cunit.sourceforge.net/ 00:06:08.058 00:06:08.058 00:06:08.058 Suite: io_channel 00:06:08.058 Test: thread_alloc ...passed 00:06:08.058 Test: thread_send_msg ...passed 00:06:08.058 Test: thread_poller ...passed 00:06:08.058 Test: poller_pause ...passed 00:06:08.058 Test: thread_for_each ...passed 00:06:08.058 Test: for_each_channel_remove ...passed 00:06:08.058 Test: for_each_channel_unreg ...[2024-07-15 13:58:53.964678] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2177:spdk_io_device_register: *ERROR*: io_device 0x7fff8a58cc60 already registered (old:0x613000000200 new:0x6130000003c0) 00:06:08.058 passed 00:06:08.058 Test: thread_name ...passed 00:06:08.058 Test: channel ...[2024-07-15 13:58:53.967515] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x49b3e0 00:06:08.058 passed 00:06:08.058 Test: channel_destroy_races ...passed 00:06:08.058 Test: thread_exit_test ...[2024-07-15 13:58:53.971071] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 639:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully 00:06:08.058 passed 00:06:08.058 Test: thread_update_stats_test ...passed 00:06:08.058 Test: nested_channel ...passed 00:06:08.058 Test: device_unregister_and_thread_exit_race ...passed 00:06:08.058 Test: cache_closest_timed_poller ...passed 00:06:08.058 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:08.058 
Test: io_device_lookup ...passed 00:06:08.058 Test: spdk_spin ...[2024-07-15 13:58:53.978237] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:08.058 [2024-07-15 13:58:53.978298] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff8a58cc40 00:06:08.058 [2024-07-15 13:58:53.978405] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:08.058 [2024-07-15 13:58:53.979603] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:08.058 [2024-07-15 13:58:53.979678] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff8a58cc40 00:06:08.058 [2024-07-15 13:58:53.979717] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:08.058 [2024-07-15 13:58:53.979771] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff8a58cc40 00:06:08.058 [2024-07-15 13:58:53.979809] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:08.058 [2024-07-15 13:58:53.979847] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff8a58cc40 00:06:08.058 [2024-07-15 13:58:53.979892] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:08.058 [2024-07-15 13:58:53.979943] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x7fff8a58cc40 00:06:08.058 passed 00:06:08.058 Test: for_each_channel_and_thread_exit_race ...passed 00:06:08.058 Test: for_each_thread_and_thread_exit_race ...passed 00:06:08.058 00:06:08.058 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.058 suites 1 1 n/a 0 0 00:06:08.058 tests 20 20 20 0 0 00:06:08.058 asserts 409 409 409 0 n/a 00:06:08.058 00:06:08.058 Elapsed time = 0.033 seconds 00:06:08.058 00:06:08.058 real 0m0.066s 00:06:08.058 user 0m0.042s 00:06:08.058 sys 0m0.024s 00:06:08.058 13:58:54 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.058 13:58:54 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.058 ************************************ 00:06:08.058 END TEST unittest_thread 00:06:08.058 ************************************ 00:06:08.058 13:58:54 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:08.058 13:58:54 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:08.058 13:58:54 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.058 13:58:54 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.058 13:58:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:08.058 ************************************ 00:06:08.058 START TEST unittest_iobuf 00:06:08.058 ************************************ 00:06:08.058 13:58:54 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:08.317 00:06:08.317 00:06:08.317 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.317 http://cunit.sourceforge.net/ 00:06:08.317 00:06:08.317 00:06:08.318 Suite: io_channel 00:06:08.318 Test: iobuf ...passed 00:06:08.318 Test: iobuf_cache ...[2024-07-15 13:58:54.066522] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:08.318 [2024-07-15 13:58:54.066810] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:08.318 [2024-07-15 13:58:54.066948] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:08.318 [2024-07-15 13:58:54.067002] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:08.318 [2024-07-15 13:58:54.067073] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:08.318 [2024-07-15 13:58:54.067122] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:08.318 passed 00:06:08.318 00:06:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.318 suites 1 1 n/a 0 0 00:06:08.318 tests 2 2 2 0 0 00:06:08.318 asserts 107 107 107 0 n/a 00:06:08.318 00:06:08.318 Elapsed time = 0.005 seconds 00:06:08.318 00:06:08.318 real 0m0.030s 00:06:08.318 user 0m0.019s 00:06:08.318 sys 0m0.011s 00:06:08.318 13:58:54 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.318 13:58:54 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 ************************************ 00:06:08.318 END TEST unittest_iobuf 00:06:08.318 ************************************ 00:06:08.318 13:58:54 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:08.318 13:58:54 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:06:08.318 13:58:54 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.318 13:58:54 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.318 13:58:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:08.318 ************************************ 00:06:08.318 START TEST unittest_util 00:06:08.318 ************************************ 00:06:08.318 13:58:54 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:06:08.318 13:58:54 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:08.318 00:06:08.318 00:06:08.318 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.318 http://cunit.sourceforge.net/ 00:06:08.318 00:06:08.318 00:06:08.318 Suite: base64 00:06:08.318 Test: test_base64_get_encoded_strlen ...passed 00:06:08.318 Test: test_base64_get_decoded_len ...passed 00:06:08.318 Test: test_base64_encode ...passed 00:06:08.318 Test: 
test_base64_decode ...passed 00:06:08.318 Test: test_base64_urlsafe_encode ...passed 00:06:08.318 Test: test_base64_urlsafe_decode ...passed 00:06:08.318 00:06:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.318 suites 1 1 n/a 0 0 00:06:08.318 tests 6 6 6 0 0 00:06:08.318 asserts 112 112 112 0 n/a 00:06:08.318 00:06:08.318 Elapsed time = 0.000 seconds 00:06:08.318 13:58:54 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:08.318 00:06:08.318 00:06:08.318 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.318 http://cunit.sourceforge.net/ 00:06:08.318 00:06:08.318 00:06:08.318 Suite: bit_array 00:06:08.318 Test: test_1bit ...passed 00:06:08.318 Test: test_64bit ...passed 00:06:08.318 Test: test_find ...passed 00:06:08.318 Test: test_resize ...passed 00:06:08.318 Test: test_errors ...passed 00:06:08.318 Test: test_count ...passed 00:06:08.318 Test: test_mask_store_load ...passed 00:06:08.318 Test: test_mask_clear ...passed 00:06:08.318 00:06:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.318 suites 1 1 n/a 0 0 00:06:08.318 tests 8 8 8 0 0 00:06:08.318 asserts 5075 5075 5075 0 n/a 00:06:08.318 00:06:08.318 Elapsed time = 0.001 seconds 00:06:08.318 13:58:54 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:08.318 00:06:08.318 00:06:08.318 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.318 http://cunit.sourceforge.net/ 00:06:08.318 00:06:08.318 00:06:08.318 Suite: cpuset 00:06:08.318 Test: test_cpuset ...passed 00:06:08.318 Test: test_cpuset_parse ...[2024-07-15 13:58:54.184084] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:08.318 [2024-07-15 13:58:54.184338] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:06:08.318 [2024-07-15 13:58:54.184439] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:08.318 [2024-07-15 13:58:54.184530] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:08.318 [2024-07-15 13:58:54.184574] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:08.318 [2024-07-15 13:58:54.184617] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:08.318 [2024-07-15 13:58:54.184654] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:08.318 [2024-07-15 13:58:54.184710] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:08.318 passed 00:06:08.318 Test: test_cpuset_fmt ...passed 00:06:08.318 Test: test_cpuset_foreach ...passed 00:06:08.318 00:06:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.318 suites 1 1 n/a 0 0 00:06:08.318 tests 4 4 4 0 0 00:06:08.318 asserts 90 90 90 0 n/a 00:06:08.318 00:06:08.318 Elapsed time = 0.002 seconds 00:06:08.318 13:58:54 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:08.318 00:06:08.318 00:06:08.318 CUnit - A unit testing framework for C - 
Version 2.1-3 00:06:08.318 http://cunit.sourceforge.net/ 00:06:08.318 00:06:08.318 00:06:08.318 Suite: crc16 00:06:08.318 Test: test_crc16_t10dif ...passed 00:06:08.318 Test: test_crc16_t10dif_seed ...passed 00:06:08.318 Test: test_crc16_t10dif_copy ...passed 00:06:08.318 00:06:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.318 suites 1 1 n/a 0 0 00:06:08.318 tests 3 3 3 0 0 00:06:08.318 asserts 5 5 5 0 n/a 00:06:08.318 00:06:08.318 Elapsed time = 0.000 seconds 00:06:08.318 13:58:54 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:08.318 00:06:08.318 00:06:08.318 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.318 http://cunit.sourceforge.net/ 00:06:08.318 00:06:08.318 00:06:08.318 Suite: crc32_ieee 00:06:08.318 Test: test_crc32_ieee ...passed 00:06:08.318 00:06:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.318 suites 1 1 n/a 0 0 00:06:08.318 tests 1 1 1 0 0 00:06:08.318 asserts 1 1 1 0 n/a 00:06:08.318 00:06:08.318 Elapsed time = 0.000 seconds 00:06:08.318 13:58:54 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:08.318 00:06:08.318 00:06:08.318 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.318 http://cunit.sourceforge.net/ 00:06:08.318 00:06:08.318 00:06:08.318 Suite: crc32c 00:06:08.318 Test: test_crc32c ...passed 00:06:08.318 Test: test_crc32c_nvme ...passed 00:06:08.318 00:06:08.318 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.318 suites 1 1 n/a 0 0 00:06:08.318 tests 2 2 2 0 0 00:06:08.318 asserts 16 16 16 0 n/a 00:06:08.318 00:06:08.318 Elapsed time = 0.000 seconds 00:06:08.319 13:58:54 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:08.319 00:06:08.319 00:06:08.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.319 http://cunit.sourceforge.net/ 00:06:08.319 00:06:08.319 00:06:08.319 Suite: crc64 00:06:08.319 Test: test_crc64_nvme ...passed 00:06:08.319 00:06:08.319 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.319 suites 1 1 n/a 0 0 00:06:08.319 tests 1 1 1 0 0 00:06:08.319 asserts 4 4 4 0 n/a 00:06:08.319 00:06:08.319 Elapsed time = 0.000 seconds 00:06:08.319 13:58:54 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:08.319 00:06:08.319 00:06:08.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.319 http://cunit.sourceforge.net/ 00:06:08.319 00:06:08.319 00:06:08.319 Suite: string 00:06:08.319 Test: test_parse_ip_addr ...passed 00:06:08.319 Test: test_str_chomp ...passed 00:06:08.319 Test: test_parse_capacity ...passed 00:06:08.319 Test: test_sprintf_append_realloc ...passed 00:06:08.319 Test: test_strtol ...passed 00:06:08.319 Test: test_strtoll ...passed 00:06:08.319 Test: test_strarray ...passed 00:06:08.319 Test: test_strcpy_replace ...passed 00:06:08.319 00:06:08.319 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.319 suites 1 1 n/a 0 0 00:06:08.319 tests 8 8 8 0 0 00:06:08.319 asserts 161 161 161 0 n/a 00:06:08.319 00:06:08.319 Elapsed time = 0.000 seconds 00:06:08.319 13:58:54 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:08.319 00:06:08.319 00:06:08.319 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.319 http://cunit.sourceforge.net/ 
00:06:08.319 00:06:08.319 00:06:08.319 Suite: dif 00:06:08.579 Test: dif_generate_and_verify_test ...[2024-07-15 13:58:54.318558] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:08.579 [2024-07-15 13:58:54.318935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:08.579 [2024-07-15 13:58:54.319116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:08.579 [2024-07-15 13:58:54.319294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:08.579 [2024-07-15 13:58:54.319508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:08.579 [2024-07-15 13:58:54.319692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:08.579 passed 00:06:08.579 Test: dif_disable_check_test ...[2024-07-15 13:58:54.320318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:08.579 [2024-07-15 13:58:54.320537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:08.579 [2024-07-15 13:58:54.320699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:08.579 passed 00:06:08.579 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-15 13:58:54.321309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:08.579 [2024-07-15 13:58:54.321501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:08.579 [2024-07-15 13:58:54.321692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:08.579 [2024-07-15 13:58:54.321959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:08.579 [2024-07-15 13:58:54.322188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:08.579 [2024-07-15 13:58:54.322385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:08.579 [2024-07-15 13:58:54.322577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:08.579 [2024-07-15 13:58:54.322776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:08.579 [2024-07-15 13:58:54.322989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:08.579 [2024-07-15 13:58:54.323177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, 
Actual=0 00:06:08.579 [2024-07-15 13:58:54.323381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:08.579 passed 00:06:08.579 Test: dif_apptag_mask_test ...[2024-07-15 13:58:54.323576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:08.579 [2024-07-15 13:58:54.323781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:08.579 passed 00:06:08.579 Test: dif_sec_512_md_0_error_test ...[2024-07-15 13:58:54.323928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:08.579 passed 00:06:08.579 Test: dif_sec_4096_md_0_error_test ...[2024-07-15 13:58:54.323979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:08.579 [2024-07-15 13:58:54.324025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:08.579 passed 00:06:08.579 Test: dif_sec_4100_md_128_error_test ...passed 00:06:08.579 Test: dif_guard_seed_test ...[2024-07-15 13:58:54.324102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:08.579 [2024-07-15 13:58:54.324157] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:08.579 passed 00:06:08.579 Test: dif_guard_value_test ...passed 00:06:08.579 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:08.579 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:08.579 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 13:58:54.348507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ed4c, Actual=fd4c 00:06:08.579 [2024-07-15 13:58:54.349875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=93, Expected=ee21, Actual=fe21 00:06:08.579 [2024-07-15 13:58:54.351190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.579 [2024-07-15 13:58:54.352521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.579 [2024-07-15 13:58:54.353888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.579 [2024-07-15 13:58:54.355204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.579 [2024-07-15 13:58:54.356542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=5aa1 00:06:08.579 [2024-07-15 13:58:54.357648] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fe21, Actual=feb3 00:06:08.580 [2024-07-15 13:58:54.358766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ab753ed, Actual=1ab753ed 00:06:08.580 [2024-07-15 13:58:54.360098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=28574660, Actual=38574660 00:06:08.580 [2024-07-15 13:58:54.361448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.362776] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.364113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.580 [2024-07-15 13:58:54.365439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.580 [2024-07-15 13:58:54.366774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=de5d864b 00:06:08.580 [2024-07-15 13:58:54.367882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=38574660, Actual=2af7bdf3 00:06:08.580 [2024-07-15 13:58:54.369019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.580 [2024-07-15 13:58:54.370339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d5837a266, Actual=88010a2d4837a266 00:06:08.580 [2024-07-15 13:58:54.371652] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.372991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.374315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10000000005d 00:06:08.580 [2024-07-15 13:58:54.375638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10000000005d 00:06:08.580 [2024-07-15 13:58:54.377009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.580 [2024-07-15 13:58:54.378121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=88010a2d4837a266, Actual=f7bef93cda1b82bb 00:06:08.580 passed 00:06:08.580 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-15 13:58:54.378631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed4c, Actual=fd4c 00:06:08.580 [2024-07-15 13:58:54.378830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ee21, Actual=fe21 00:06:08.580 [2024-07-15 13:58:54.379009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.379192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.379397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.379575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.379766] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5aa1 00:06:08.580 [2024-07-15 13:58:54.379963] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=feb3 00:06:08.580 [2024-07-15 13:58:54.380165] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ab753ed, Actual=1ab753ed 00:06:08.580 [2024-07-15 13:58:54.380345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=28574660, Actual=38574660 00:06:08.580 [2024-07-15 13:58:54.380541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.380745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.380938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.381113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.381293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=de5d864b 00:06:08.580 [2024-07-15 13:58:54.381461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2af7bdf3 00:06:08.580 [2024-07-15 13:58:54.381654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.580 [2024-07-15 13:58:54.381844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d5837a266, Actual=88010a2d4837a266 00:06:08.580 [2024-07-15 13:58:54.382022] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.382195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.382379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.580 [2024-07-15 13:58:54.382554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.580 [2024-07-15 13:58:54.382765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.580 [2024-07-15 13:58:54.382950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f7bef93cda1b82bb 00:06:08.580 passed 00:06:08.580 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-15 13:58:54.383168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed4c, Actual=fd4c 00:06:08.580 [2024-07-15 13:58:54.383350] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ee21, Actual=fe21 00:06:08.580 [2024-07-15 13:58:54.383527] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.383710] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.383910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.384114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.384293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5aa1 00:06:08.580 [2024-07-15 13:58:54.384472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=feb3 00:06:08.580 [2024-07-15 13:58:54.384645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ab753ed, Actual=1ab753ed 00:06:08.580 [2024-07-15 13:58:54.384834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=28574660, Actual=38574660 00:06:08.580 [2024-07-15 13:58:54.385014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.385189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.385366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.385546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.385737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=de5d864b 00:06:08.580 [2024-07-15 13:58:54.385920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2af7bdf3 00:06:08.580 [2024-07-15 13:58:54.386115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.580 [2024-07-15 13:58:54.386284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d5837a266, Actual=88010a2d4837a266 00:06:08.580 [2024-07-15 13:58:54.386461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.386642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.386826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.580 [2024-07-15 13:58:54.386998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.580 [2024-07-15 13:58:54.387192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.580 [2024-07-15 13:58:54.387363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f7bef93cda1b82bb 00:06:08.580 passed 00:06:08.580 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-15 13:58:54.387565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed4c, Actual=fd4c 00:06:08.580 [2024-07-15 13:58:54.387804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ee21, Actual=fe21 00:06:08.580 [2024-07-15 13:58:54.388009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.388213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.388412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.388590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.388777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=88, Expected=fd4c, Actual=5aa1 00:06:08.580 [2024-07-15 13:58:54.388958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=feb3 00:06:08.580 [2024-07-15 13:58:54.389136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ab753ed, Actual=1ab753ed 00:06:08.580 [2024-07-15 13:58:54.389310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=28574660, Actual=38574660 00:06:08.580 [2024-07-15 13:58:54.389512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.389695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.580 [2024-07-15 13:58:54.389880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.390056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.580 [2024-07-15 13:58:54.390236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=de5d864b 00:06:08.580 [2024-07-15 13:58:54.390421] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2af7bdf3 00:06:08.580 [2024-07-15 13:58:54.390619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.580 [2024-07-15 13:58:54.390811] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d5837a266, Actual=88010a2d4837a266 00:06:08.581 [2024-07-15 13:58:54.390992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.391193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.391367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.581 [2024-07-15 13:58:54.391543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.581 [2024-07-15 13:58:54.391749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.581 [2024-07-15 13:58:54.391944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f7bef93cda1b82bb 00:06:08.581 passed 00:06:08.581 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-15 13:58:54.392192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed4c, Actual=fd4c 00:06:08.581 [2024-07-15 13:58:54.392370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=ee21, Actual=fe21 00:06:08.581 [2024-07-15 13:58:54.392550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.392743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.392960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.393141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.393320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5aa1 00:06:08.581 [2024-07-15 13:58:54.393501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=feb3 00:06:08.581 passed 00:06:08.581 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-15 13:58:54.393695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ab753ed, Actual=1ab753ed 00:06:08.581 [2024-07-15 13:58:54.393885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=28574660, Actual=38574660 00:06:08.581 [2024-07-15 13:58:54.394082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.394255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.394430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.394605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.394802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=de5d864b 00:06:08.581 [2024-07-15 13:58:54.394983] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2af7bdf3 00:06:08.581 [2024-07-15 13:58:54.395203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.581 [2024-07-15 13:58:54.395387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d5837a266, Actual=88010a2d4837a266 00:06:08.581 [2024-07-15 13:58:54.395561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.395752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.395939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, 
Actual=100000000058 00:06:08.581 [2024-07-15 13:58:54.396145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.581 [2024-07-15 13:58:54.396340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.581 [2024-07-15 13:58:54.396520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f7bef93cda1b82bb 00:06:08.581 passed 00:06:08.581 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-15 13:58:54.396720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed4c, Actual=fd4c 00:06:08.581 [2024-07-15 13:58:54.396916] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ee21, Actual=fe21 00:06:08.581 [2024-07-15 13:58:54.397078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.397256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.397453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.397627] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.397813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=5aa1 00:06:08.581 [2024-07-15 13:58:54.397987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=feb3 00:06:08.581 passed 00:06:08.581 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-15 13:58:54.398199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ab753ed, Actual=1ab753ed 00:06:08.581 [2024-07-15 13:58:54.398377] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=28574660, Actual=38574660 00:06:08.581 [2024-07-15 13:58:54.398575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.398771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.398951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.399128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000058 00:06:08.581 [2024-07-15 13:58:54.399306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=de5d864b 00:06:08.581 [2024-07-15 13:58:54.399481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=2af7bdf3 00:06:08.581 [2024-07-15 13:58:54.399718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.581 [2024-07-15 13:58:54.399918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d5837a266, Actual=88010a2d4837a266 00:06:08.581 [2024-07-15 13:58:54.400119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.400300] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.400476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.581 [2024-07-15 13:58:54.400649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000000058 00:06:08.581 [2024-07-15 13:58:54.400853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.581 [2024-07-15 13:58:54.401033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=f7bef93cda1b82bb 00:06:08.581 passed 00:06:08.581 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:08.581 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:08.581 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:08.581 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:08.581 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:08.581 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:08.581 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:08.581 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:08.581 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:08.581 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 13:58:54.425047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ed4c, Actual=fd4c 00:06:08.581 [2024-07-15 13:58:54.425840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=6663, Actual=7663 00:06:08.581 [2024-07-15 13:58:54.426596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.427367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.428158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.581 [2024-07-15 13:58:54.428940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.581 [2024-07-15 13:58:54.429708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=5aa1 00:06:08.581 [2024-07-15 13:58:54.430489] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=f1d3 00:06:08.581 [2024-07-15 13:58:54.431276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ab753ed, Actual=1ab753ed 00:06:08.581 [2024-07-15 13:58:54.432072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ec744b7a, Actual=fc744b7a 00:06:08.581 [2024-07-15 13:58:54.432867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.433661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.434442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.581 [2024-07-15 13:58:54.435216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.581 [2024-07-15 13:58:54.435999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=de5d864b 00:06:08.581 [2024-07-15 13:58:54.436797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=489d9e0b 00:06:08.581 [2024-07-15 13:58:54.437570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.581 [2024-07-15 13:58:54.438378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=b400026237881905, Actual=b400026227881905 00:06:08.581 [2024-07-15 13:58:54.439162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.439936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.581 [2024-07-15 13:58:54.440718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10000000005d 00:06:08.582 [2024-07-15 13:58:54.441494] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10000000005d 00:06:08.582 [2024-07-15 13:58:54.442261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.582 passed 00:06:08.582 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 13:58:54.443053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=c2751ce9fd8791ed 00:06:08.582 [2024-07-15 13:58:54.443313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ed4c, Actual=fd4c 00:06:08.582 [2024-07-15 13:58:54.443520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5e2, Actual=15e2 00:06:08.582 [2024-07-15 13:58:54.443798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.444067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.444352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.444642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.444903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=5aa1 00:06:08.582 [2024-07-15 13:58:54.445156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=9252 00:06:08.582 [2024-07-15 13:58:54.445368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ab753ed, Actual=1ab753ed 00:06:08.582 [2024-07-15 13:58:54.445563] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2df47e8f, Actual=3df47e8f 00:06:08.582 [2024-07-15 13:58:54.445800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.446036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.446246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.446451] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.446647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=de5d864b 00:06:08.582 [2024-07-15 13:58:54.446865] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=891dabfe 00:06:08.582 [2024-07-15 13:58:54.447119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.582 [2024-07-15 13:58:54.447315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=41e20df111679d5a, Actual=41e20df101679d5a 00:06:08.582 [2024-07-15 13:58:54.447510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.447699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.447918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100000000059 00:06:08.582 [2024-07-15 13:58:54.448127] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100000000059 00:06:08.582 [2024-07-15 13:58:54.448348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.582 [2024-07-15 13:58:54.448547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=3797137adb6815b2 00:06:08.582 passed 00:06:08.582 Test: dix_sec_512_md_0_error ...[2024-07-15 13:58:54.448618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:08.582 passed 00:06:08.582 Test: dix_sec_512_md_8_prchk_0_single_iov ...passed 00:06:08.582 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:08.582 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:08.582 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:08.582 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:08.582 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:08.582 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:08.582 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:08.582 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:08.582 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 13:58:54.472119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ed4c, Actual=fd4c 00:06:08.582 [2024-07-15 13:58:54.472909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=6663, Actual=7663 00:06:08.582 [2024-07-15 13:58:54.473675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.474449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.475259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.582 [2024-07-15 13:58:54.476025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.582 [2024-07-15 13:58:54.476812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=fd4c, Actual=5aa1 00:06:08.582 [2024-07-15 13:58:54.477577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=f141, Actual=f1d3 00:06:08.582 [2024-07-15 13:58:54.478344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ab753ed, Actual=1ab753ed 00:06:08.582 [2024-07-15 13:58:54.479132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=ec744b7a, Actual=fc744b7a 00:06:08.582 [2024-07-15 13:58:54.479915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.480680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.481450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.582 [2024-07-15 13:58:54.482221] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=1000005d 00:06:08.582 [2024-07-15 13:58:54.483004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=1ab753ed, Actual=de5d864b 00:06:08.582 [2024-07-15 13:58:54.483771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=5a3d6598, Actual=489d9e0b 00:06:08.582 [2024-07-15 13:58:54.484560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.582 [2024-07-15 13:58:54.485322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=b400026237881905, Actual=b400026227881905 00:06:08.582 [2024-07-15 13:58:54.486095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.486857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=93, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.487614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10000000005d 00:06:08.582 [2024-07-15 13:58:54.488378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=93, Expected=5d, Actual=10000000005d 00:06:08.582 [2024-07-15 13:58:54.489172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.582 passed 00:06:08.582 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 13:58:54.489926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=93, Expected=bdcaeff86fabb130, Actual=c2751ce9fd8791ed 00:06:08.582 [2024-07-15 13:58:54.490188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ed4c, Actual=fd4c 00:06:08.582 [2024-07-15 13:58:54.490372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=5e2, Actual=15e2 00:06:08.582 [2024-07-15 13:58:54.490574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.490785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.491002] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.491200] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.491403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=fd4c, Actual=5aa1 00:06:08.582 [2024-07-15 13:58:54.491591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=92c0, Actual=9252 00:06:08.582 [2024-07-15 13:58:54.491804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=ab753ed, Actual=1ab753ed 00:06:08.582 [2024-07-15 13:58:54.491999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=2df47e8f, Actual=3df47e8f 00:06:08.582 [2024-07-15 13:58:54.492220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.492422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.492613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.492825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000059 00:06:08.582 [2024-07-15 13:58:54.493014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=1ab753ed, Actual=de5d864b 00:06:08.582 [2024-07-15 13:58:54.493209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=9bbd506d, Actual=891dabfe 00:06:08.582 [2024-07-15 13:58:54.493404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7729ecc20d3, Actual=a576a7728ecc20d3 00:06:08.582 [2024-07-15 13:58:54.493599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=41e20df111679d5a, Actual=41e20df101679d5a 00:06:08.582 [2024-07-15 13:58:54.493802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.494016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89, Expected=88, Actual=1088 00:06:08.582 [2024-07-15 13:58:54.494225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100000000059 00:06:08.582 [2024-07-15 13:58:54.494418] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100000000059 00:06:08.582 [2024-07-15 13:58:54.494599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=a576a7728ecc20d3, Actual=4dd9eb455e81b002 00:06:08.582 [2024-07-15 13:58:54.494810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89, Expected=4828e06b4944356f, Actual=3797137adb6815b2 00:06:08.582 passed 00:06:08.582 Test: set_md_interleave_iovs_test ...passed 00:06:08.583 Test: set_md_interleave_iovs_split_test ...passed 00:06:08.583 Test: dif_generate_stream_pi_16_test ...passed 00:06:08.583 Test: dif_generate_stream_test ...passed 00:06:08.583 Test: set_md_interleave_iovs_alignment_test ...[2024-07-15 13:58:54.498711] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 00:06:08.583 passed 00:06:08.583 Test: dif_generate_split_test ...passed 00:06:08.583 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:08.583 Test: dif_verify_split_test ...passed 00:06:08.583 Test: dif_verify_stream_multi_segments_test ...passed 00:06:08.583 Test: update_crc32c_pi_16_test ...passed 00:06:08.583 Test: update_crc32c_test ...passed 00:06:08.583 Test: dif_update_crc32c_split_test ...passed 00:06:08.583 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:08.583 Test: get_range_with_md_test ...passed 00:06:08.583 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:08.583 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:08.583 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:08.583 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:08.583 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:08.583 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:08.583 Test: dif_generate_and_verify_unmap_test ...passed 00:06:08.583 00:06:08.583 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.583 suites 1 1 n/a 0 0 00:06:08.583 tests 79 79 79 0 0 00:06:08.583 asserts 3584 3584 3584 0 n/a 00:06:08.583 00:06:08.583 Elapsed time = 0.204 seconds 00:06:08.583 13:58:54 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:08.583 00:06:08.583 00:06:08.583 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.583 http://cunit.sourceforge.net/ 00:06:08.583 00:06:08.583 00:06:08.583 Suite: iov 00:06:08.583 Test: test_single_iov ...passed 00:06:08.583 Test: test_simple_iov ...passed 00:06:08.583 Test: test_complex_iov ...passed 00:06:08.583 Test: test_iovs_to_buf ...passed 00:06:08.583 Test: test_buf_to_iovs ...passed 00:06:08.583 Test: test_memset ...passed 00:06:08.583 Test: test_iov_one ...passed 00:06:08.583 Test: test_iov_xfer ...passed 00:06:08.583 00:06:08.583 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.583 suites 1 1 n/a 0 0 00:06:08.583 tests 8 8 8 0 0 00:06:08.583 asserts 156 156 156 0 n/a 00:06:08.583 00:06:08.583 Elapsed time = 0.000 seconds 00:06:08.583 13:58:54 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:08.583 00:06:08.583 00:06:08.583 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.583 http://cunit.sourceforge.net/ 00:06:08.583 00:06:08.583 00:06:08.583 Suite: math 00:06:08.583 Test: test_serial_number_arithmetic ...passed 00:06:08.583 Suite: erase 00:06:08.583 Test: test_memset_s ...passed 00:06:08.583 00:06:08.583 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.583 suites 2 2 n/a 0 0 00:06:08.583 tests 2 2 2 0 0 00:06:08.583 asserts 18 18 18 0 n/a 00:06:08.583 00:06:08.583 Elapsed time = 0.000 seconds 00:06:08.842 13:58:54 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:08.842 00:06:08.842 00:06:08.842 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.842 http://cunit.sourceforge.net/ 00:06:08.842 00:06:08.842 00:06:08.842 Suite: pipe 00:06:08.842 Test: test_create_destroy ...passed 00:06:08.842 Test: test_write_get_buffer ...passed 00:06:08.842 Test: test_write_advance ...passed 
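[Editor's note] The dif_ut run that completes above (79 tests, 3584 asserts) exercises SPDK's T10 DIF / protection-information code, and its *ERROR* lines are expected output: the tests deliberately corrupt the guard, application tag and reference tag and verify that the checks reject them with exactly those messages. For orientation, below is a minimal sketch of the classic 8-byte T10 PI field and the kind of comparison behind "Failed to compare Guard/App Tag/Ref Tag"; the struct, field and function names are illustrative assumptions, not SPDK's actual definitions:

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Classic 8-byte T10 protection-information field appended to each block.
     * On the wire the fields are big-endian; host byte order is used here for brevity. */
    struct t10_pi_field {
        uint16_t guard;    /* CRC-16 (T10DIF) computed over the data block   */
        uint16_t app_tag;  /* application-defined tag                        */
        uint32_t ref_tag;  /* typically the low 32 bits of the starting LBA  */
    };

    /* Illustrative verify helper shaped like the log messages above: each mismatch
     * is reported as "Failed to compare <field>: LBA=..., Expected=..., Actual=...". */
    static bool pi_verify(uint64_t lba, const struct t10_pi_field *want,
                          const struct t10_pi_field *got)
    {
        bool ok = true;

        if (want->guard != got->guard) {
            fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
                            ", Expected=%x, Actual=%x\n", lba, want->guard, got->guard);
            ok = false;
        }
        if (want->app_tag != got->app_tag) {
            fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64
                            ", Expected=%x, Actual=%x\n", lba, want->app_tag, got->app_tag);
            ok = false;
        }
        if (want->ref_tag != got->ref_tag) {
            fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64
                            ", Expected=%x, Actual=%x\n", lba, want->ref_tag, got->ref_tag);
            ok = false;
        }
        return ok;
    }

    int main(void)
    {
        /* A deliberately mismatched app tag, as in the dif_apptag_mask_test case above
         * ("LBA=12, Expected=1256, Actual=1234"). */
        struct t10_pi_field want = { .guard = 0xfd4c, .app_tag = 0x1256, .ref_tag = 12 };
        struct t10_pi_field got  = { .guard = 0xfd4c, .app_tag = 0x1234, .ref_tag = 12 };

        return pi_verify(12, &want, &got) ? 0 : 1;
    }

The dif_sec_*_md_0_error_test cases above probe the other side of the API: spdk_dif_ctx_init refusing configurations where the metadata region is smaller than the DIF field or the block size is zero, which is why those messages also appear as expected errors rather than failures.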
00:06:08.842 Test: test_read_get_buffer ...passed 00:06:08.842 Test: test_read_advance ...passed 00:06:08.842 Test: test_data ...passed 00:06:08.842 00:06:08.842 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.842 suites 1 1 n/a 0 0 00:06:08.842 tests 6 6 6 0 0 00:06:08.842 asserts 251 251 251 0 n/a 00:06:08.842 00:06:08.842 Elapsed time = 0.000 seconds 00:06:08.842 13:58:54 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:08.842 00:06:08.842 00:06:08.842 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.842 http://cunit.sourceforge.net/ 00:06:08.842 00:06:08.842 00:06:08.842 Suite: xor 00:06:08.842 Test: test_xor_gen ...passed 00:06:08.842 00:06:08.842 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.842 suites 1 1 n/a 0 0 00:06:08.842 tests 1 1 1 0 0 00:06:08.842 asserts 17 17 17 0 n/a 00:06:08.842 00:06:08.842 Elapsed time = 0.001 seconds 00:06:08.842 00:06:08.842 real 0m0.510s 00:06:08.842 user 0m0.370s 00:06:08.842 sys 0m0.139s 00:06:08.842 13:58:54 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.842 13:58:54 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:06:08.842 ************************************ 00:06:08.842 END TEST unittest_util 00:06:08.842 ************************************ 00:06:08.842 13:58:54 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:08.842 13:58:54 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:08.842 13:58:54 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:08.842 13:58:54 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.842 13:58:54 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.842 13:58:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:08.842 ************************************ 00:06:08.842 START TEST unittest_vhost 00:06:08.842 ************************************ 00:06:08.842 13:58:54 unittest.unittest_vhost -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:08.842 00:06:08.842 00:06:08.842 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.842 http://cunit.sourceforge.net/ 00:06:08.842 00:06:08.842 00:06:08.842 Suite: vhost_suite 00:06:08.842 Test: desc_to_iov_test ...[2024-07-15 13:58:54.703791] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:08.842 passed 00:06:08.842 Test: create_controller_test ...[2024-07-15 13:58:54.707154] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:08.842 [2024-07-15 13:58:54.707256] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:08.842 [2024-07-15 13:58:54.707372] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:08.842 [2024-07-15 13:58:54.707449] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:08.842 [2024-07-15 13:58:54.707496] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't 
register controller with no name 00:06:08.843 [2024-07-15 13:58:54.707878] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:06:08.843 passed 00:06:08.843 Test: session_find_by_vid_test ...[2024-07-15 13:58:54.708629] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:08.843 passed 00:06:08.843 Test: remove_controller_test ...[2024-07-15 13:58:54.710192] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:08.843 passed 00:06:08.843 Test: vq_avail_ring_get_test ...passed 00:06:08.843 Test: vq_packed_ring_test ...passed 00:06:08.843 Test: vhost_blk_construct_test ...passed 00:06:08.843 00:06:08.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.843 suites 1 1 n/a 0 0 00:06:08.843 tests 7 7 7 0 0 00:06:08.843 asserts 147 147 147 0 n/a 00:06:08.843 00:06:08.843 Elapsed time = 0.009 seconds 00:06:08.843 00:06:08.843 real 0m0.038s 00:06:08.843 user 0m0.022s 00:06:08.843 sys 0m0.016s 00:06:08.843 13:58:54 unittest.unittest_vhost -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.843 13:58:54 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:06:08.843 ************************************ 00:06:08.843 END TEST unittest_vhost 00:06:08.843 ************************************ 00:06:08.843 13:58:54 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:08.843 13:58:54 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:08.843 13:58:54 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.843 13:58:54 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.843 13:58:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:08.843 ************************************ 00:06:08.843 START TEST unittest_dma 00:06:08.843 ************************************ 00:06:08.843 13:58:54 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:08.843 00:06:08.843 00:06:08.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.843 http://cunit.sourceforge.net/ 00:06:08.843 00:06:08.843 00:06:08.843 Suite: dma_suite 00:06:08.843 Test: test_dma ...[2024-07-15 13:58:54.792182] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:08.843 passed 00:06:08.843 00:06:08.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.843 suites 1 1 n/a 0 0 00:06:08.843 tests 1 1 1 0 0 00:06:08.843 asserts 54 54 54 0 n/a 00:06:08.843 00:06:08.843 Elapsed time = 0.000 seconds 00:06:08.843 00:06:08.843 real 0m0.024s 00:06:08.843 user 0m0.012s 00:06:08.843 sys 0m0.012s 00:06:08.843 13:58:54 unittest.unittest_dma -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.843 13:58:54 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:06:08.843 ************************************ 00:06:08.843 END TEST unittest_dma 00:06:08.843 ************************************ 00:06:08.843 13:58:54 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:08.843 13:58:54 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:06:08.843 13:58:54 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.843 13:58:54 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.102 13:58:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:09.102 ************************************ 00:06:09.102 START TEST unittest_init 00:06:09.102 ************************************ 00:06:09.102 13:58:54 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:06:09.102 13:58:54 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:09.102 00:06:09.103 00:06:09.103 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.103 http://cunit.sourceforge.net/ 00:06:09.103 00:06:09.103 00:06:09.103 Suite: subsystem_suite 00:06:09.103 Test: subsystem_sort_test_depends_on_single ...passed 00:06:09.103 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:09.103 Test: subsystem_sort_test_missing_dependency ...[2024-07-15 13:58:54.870405] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:09.103 [2024-07-15 13:58:54.870625] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:09.103 passed 00:06:09.103 00:06:09.103 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.103 suites 1 1 n/a 0 0 00:06:09.103 tests 3 3 3 0 0 00:06:09.103 asserts 20 20 20 0 n/a 00:06:09.103 00:06:09.103 Elapsed time = 0.000 seconds 00:06:09.103 00:06:09.103 real 0m0.028s 00:06:09.103 user 0m0.018s 00:06:09.103 sys 0m0.010s 00:06:09.103 13:58:54 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.103 13:58:54 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.103 ************************************ 00:06:09.103 END TEST unittest_init 00:06:09.103 ************************************ 00:06:09.103 13:58:54 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:09.103 13:58:54 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:06:09.103 13:58:54 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.103 13:58:54 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.103 13:58:54 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:09.103 ************************************ 00:06:09.103 START TEST unittest_keyring 00:06:09.103 ************************************ 00:06:09.103 13:58:54 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:06:09.103 00:06:09.103 00:06:09.103 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.103 http://cunit.sourceforge.net/ 00:06:09.103 00:06:09.103 00:06:09.103 Suite: keyring 00:06:09.103 Test: test_keyring_add_remove ...[2024-07-15 13:58:54.953866] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 
107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:06:09.103 [2024-07-15 13:58:54.954077] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:06:09.103 passed 00:06:09.103 Test: test_keyring_get_put ...passed 00:06:09.103 00:06:09.103 [2024-07-15 13:58:54.954157] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:06:09.103 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.103 suites 1 1 n/a 0 0 00:06:09.103 tests 2 2 2 0 0 00:06:09.103 asserts 44 44 44 0 n/a 00:06:09.103 00:06:09.103 Elapsed time = 0.000 seconds 00:06:09.103 00:06:09.103 real 0m0.025s 00:06:09.103 user 0m0.015s 00:06:09.103 sys 0m0.010s 00:06:09.103 13:58:54 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.103 13:58:54 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:06:09.103 ************************************ 00:06:09.103 END TEST unittest_keyring 00:06:09.103 ************************************ 00:06:09.103 13:58:55 unittest -- common/autotest_common.sh@1142 -- # return 0 00:06:09.103 13:58:55 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:06:09.103 13:58:55 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:09.103 13:58:55 unittest -- unit/unittest.sh@293 -- # hostname 00:06:09.103 13:58:55 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t rocky9-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:09.376 geninfo: WARNING: invalid characters removed from testname! 
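(Condensed, the coverage post-processing recorded in this log, i.e. the per-test capture above plus the merge, filter, and genhtml steps that follow, boils down to roughly the sequence below. The long --rc options and absolute paths are abbreviated here, so treat this as a sketch rather than the exact autotest invocation.)

    lcov -q -d . -c -t "$(hostname)" -o ut_cov_test.info      # capture counters from the unit-test run
    lcov -q -a ut_cov_base.info -a ut_cov_test.info \
         -o ut_cov_total.info                                  # merge with the zero-coverage baseline
    lcov -q -a ut_cov_total.info -o ut_cov_unit.info           # copy into the working tracefile
    lcov -q -r ut_cov_unit.info '*/app/*' -o ut_cov_unit.info  # drop app/, dpdk/, examples/,
                                                               # lib/vhost/rte_vhost/ and test/ sources
    genhtml ut_cov_unit.info --output-directory ut_coverage    # render the HTML report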
00:06:41.466 13:59:26 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:06:46.725 13:59:31 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:49.253 13:59:34 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:51.778 13:59:37 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:54.344 13:59:40 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:56.875 13:59:42 unittest -- unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:00.206 13:59:45 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:02.739 13:59:48 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:02.739 13:59:48 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:03.307 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 
00:07:03.307 Found 321 entries. 00:07:03.307 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:03.307 Writing .css and .png files. 00:07:03.307 Generating output. 00:07:03.307 Processing file include/linux/virtio_ring.h 00:07:03.566 Processing file include/spdk/mmio.h 00:07:03.566 Processing file include/spdk/nvme_spec.h 00:07:03.566 Processing file include/spdk/nvmf_transport.h 00:07:03.566 Processing file include/spdk/nvme.h 00:07:03.566 Processing file include/spdk/base64.h 00:07:03.566 Processing file include/spdk/thread.h 00:07:03.566 Processing file include/spdk/util.h 00:07:03.566 Processing file include/spdk/bdev_module.h 00:07:03.566 Processing file include/spdk/trace.h 00:07:03.566 Processing file include/spdk/endian.h 00:07:03.566 Processing file include/spdk/histogram_data.h 00:07:03.826 Processing file include/spdk_internal/utf.h 00:07:03.826 Processing file include/spdk_internal/rdma_utils.h 00:07:03.826 Processing file include/spdk_internal/nvme_tcp.h 00:07:03.826 Processing file include/spdk_internal/sgl.h 00:07:03.826 Processing file include/spdk_internal/virtio.h 00:07:03.826 Processing file include/spdk_internal/sock.h 00:07:03.826 Processing file lib/accel/accel_rpc.c 00:07:03.826 Processing file lib/accel/accel_sw.c 00:07:03.826 Processing file lib/accel/accel.c 00:07:04.084 Processing file lib/bdev/scsi_nvme.c 00:07:04.084 Processing file lib/bdev/part.c 00:07:04.084 Processing file lib/bdev/bdev_rpc.c 00:07:04.084 Processing file lib/bdev/bdev_zone.c 00:07:04.084 Processing file lib/bdev/bdev.c 00:07:04.342 Processing file lib/blob/request.c 00:07:04.342 Processing file lib/blob/zeroes.c 00:07:04.342 Processing file lib/blob/blobstore.h 00:07:04.342 Processing file lib/blob/blobstore.c 00:07:04.342 Processing file lib/blob/blob_bs_dev.c 00:07:04.600 Processing file lib/blobfs/tree.c 00:07:04.600 Processing file lib/blobfs/blobfs.c 00:07:04.600 Processing file lib/conf/conf.c 00:07:04.600 Processing file lib/dma/dma.c 00:07:04.859 Processing file lib/env_dpdk/init.c 00:07:04.859 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:04.859 Processing file lib/env_dpdk/env.c 00:07:04.859 Processing file lib/env_dpdk/pci_vmd.c 00:07:04.859 Processing file lib/env_dpdk/pci_dpdk.c 00:07:04.859 Processing file lib/env_dpdk/memory.c 00:07:04.859 Processing file lib/env_dpdk/pci.c 00:07:04.859 Processing file lib/env_dpdk/sigbus_handler.c 00:07:04.859 Processing file lib/env_dpdk/pci_virtio.c 00:07:04.859 Processing file lib/env_dpdk/pci_event.c 00:07:04.859 Processing file lib/env_dpdk/threads.c 00:07:04.859 Processing file lib/env_dpdk/pci_ioat.c 00:07:04.859 Processing file lib/env_dpdk/pci_idxd.c 00:07:04.859 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:05.118 Processing file lib/event/scheduler_static.c 00:07:05.118 Processing file lib/event/log_rpc.c 00:07:05.119 Processing file lib/event/reactor.c 00:07:05.119 Processing file lib/event/app.c 00:07:05.119 Processing file lib/event/app_rpc.c 00:07:05.376 Processing file lib/ftl/ftl_reloc.c 00:07:05.376 Processing file lib/ftl/ftl_io.c 00:07:05.377 Processing file lib/ftl/ftl_sb.c 00:07:05.377 Processing file lib/ftl/ftl_init.c 00:07:05.377 Processing file lib/ftl/ftl_io.h 00:07:05.377 Processing file lib/ftl/ftl_l2p.c 00:07:05.377 Processing file lib/ftl/ftl_layout.c 00:07:05.377 Processing file lib/ftl/ftl_writer.c 00:07:05.377 Processing file lib/ftl/ftl_nv_cache.h 00:07:05.377 Processing file lib/ftl/ftl_band.c 00:07:05.377 Processing file lib/ftl/ftl_nv_cache.c 00:07:05.377 Processing file 
lib/ftl/ftl_nv_cache_io.h 00:07:05.377 Processing file lib/ftl/ftl_writer.h 00:07:05.377 Processing file lib/ftl/ftl_debug.h 00:07:05.377 Processing file lib/ftl/ftl_band.h 00:07:05.377 Processing file lib/ftl/ftl_debug.c 00:07:05.377 Processing file lib/ftl/ftl_l2p_flat.c 00:07:05.377 Processing file lib/ftl/ftl_core.c 00:07:05.377 Processing file lib/ftl/ftl_band_ops.c 00:07:05.377 Processing file lib/ftl/ftl_p2l.c 00:07:05.377 Processing file lib/ftl/ftl_core.h 00:07:05.377 Processing file lib/ftl/ftl_l2p_cache.c 00:07:05.377 Processing file lib/ftl/ftl_trace.c 00:07:05.377 Processing file lib/ftl/ftl_rq.c 00:07:05.377 Processing file lib/ftl/base/ftl_base_dev.c 00:07:05.377 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:05.635 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:05.894 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:05.894 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:05.894 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:07:06.157 Processing file lib/ftl/utils/ftl_md.c 00:07:06.157 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:06.157 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:06.157 Processing file lib/ftl/utils/ftl_property.c 00:07:06.157 Processing file lib/ftl/utils/ftl_df.h 00:07:06.157 Processing file lib/ftl/utils/ftl_property.h 00:07:06.157 Processing file lib/ftl/utils/ftl_conf.c 00:07:06.157 Processing file lib/ftl/utils/ftl_mempool.c 00:07:06.157 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:06.157 Processing file lib/idxd/idxd.c 00:07:06.157 Processing file lib/idxd/idxd_user.c 00:07:06.157 Processing file lib/idxd/idxd_internal.h 00:07:06.429 Processing file lib/init/json_config.c 00:07:06.429 Processing file lib/init/subsystem_rpc.c 00:07:06.429 Processing file lib/init/rpc.c 00:07:06.429 Processing file lib/init/subsystem.c 00:07:06.429 Processing file lib/ioat/ioat.c 00:07:06.429 Processing file lib/ioat/ioat_internal.h 00:07:06.687 Processing file lib/iscsi/iscsi_rpc.c 00:07:06.687 Processing file lib/iscsi/tgt_node.c 00:07:06.687 Processing file lib/iscsi/iscsi_subsystem.c 00:07:06.687 Processing file lib/iscsi/md5.c 00:07:06.687 Processing file lib/iscsi/portal_grp.c 00:07:06.687 Processing file lib/iscsi/iscsi.c 00:07:06.687 Processing file lib/iscsi/task.c 00:07:06.687 Processing file lib/iscsi/param.c 00:07:06.687 Processing file lib/iscsi/init_grp.c 00:07:06.687 Processing file 
lib/iscsi/conn.c 00:07:06.687 Processing file lib/iscsi/iscsi.h 00:07:06.687 Processing file lib/iscsi/task.h 00:07:06.946 Processing file lib/json/json_util.c 00:07:06.946 Processing file lib/json/json_write.c 00:07:06.946 Processing file lib/json/json_parse.c 00:07:06.946 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:07:06.946 Processing file lib/jsonrpc/jsonrpc_client.c 00:07:06.946 Processing file lib/jsonrpc/jsonrpc_server.c 00:07:06.946 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:06.946 Processing file lib/keyring/keyring.c 00:07:06.946 Processing file lib/keyring/keyring_rpc.c 00:07:07.203 Processing file lib/log/log_flags.c 00:07:07.203 Processing file lib/log/log_deprecated.c 00:07:07.203 Processing file lib/log/log.c 00:07:07.203 Processing file lib/lvol/lvol.c 00:07:07.203 Processing file lib/nbd/nbd.c 00:07:07.203 Processing file lib/nbd/nbd_rpc.c 00:07:07.461 Processing file lib/notify/notify.c 00:07:07.461 Processing file lib/notify/notify_rpc.c 00:07:08.393 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:08.393 Processing file lib/nvme/nvme_poll_group.c 00:07:08.394 Processing file lib/nvme/nvme_fabric.c 00:07:08.394 Processing file lib/nvme/nvme_pcie_common.c 00:07:08.394 Processing file lib/nvme/nvme_auth.c 00:07:08.394 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:08.394 Processing file lib/nvme/nvme_tcp.c 00:07:08.394 Processing file lib/nvme/nvme_pcie_internal.h 00:07:08.394 Processing file lib/nvme/nvme_rdma.c 00:07:08.394 Processing file lib/nvme/nvme_ns_cmd.c 00:07:08.394 Processing file lib/nvme/nvme_transport.c 00:07:08.394 Processing file lib/nvme/nvme_zns.c 00:07:08.394 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:08.394 Processing file lib/nvme/nvme_internal.h 00:07:08.394 Processing file lib/nvme/nvme_cuse.c 00:07:08.394 Processing file lib/nvme/nvme_qpair.c 00:07:08.394 Processing file lib/nvme/nvme_io_msg.c 00:07:08.394 Processing file lib/nvme/nvme_opal.c 00:07:08.394 Processing file lib/nvme/nvme_ns.c 00:07:08.394 Processing file lib/nvme/nvme_pcie.c 00:07:08.394 Processing file lib/nvme/nvme_ctrlr.c 00:07:08.394 Processing file lib/nvme/nvme_discovery.c 00:07:08.394 Processing file lib/nvme/nvme.c 00:07:08.394 Processing file lib/nvme/nvme_quirks.c 00:07:08.652 Processing file lib/nvmf/ctrlr.c 00:07:08.652 Processing file lib/nvmf/nvmf_rpc.c 00:07:08.652 Processing file lib/nvmf/nvmf.c 00:07:08.652 Processing file lib/nvmf/transport.c 00:07:08.652 Processing file lib/nvmf/rdma.c 00:07:08.652 Processing file lib/nvmf/nvmf_internal.h 00:07:08.652 Processing file lib/nvmf/auth.c 00:07:08.652 Processing file lib/nvmf/subsystem.c 00:07:08.652 Processing file lib/nvmf/tcp.c 00:07:08.652 Processing file lib/nvmf/ctrlr_discovery.c 00:07:08.652 Processing file lib/nvmf/ctrlr_bdev.c 00:07:08.910 Processing file lib/rdma_provider/common.c 00:07:08.910 Processing file lib/rdma_provider/rdma_provider_verbs.c 00:07:08.910 Processing file lib/rdma_utils/rdma_utils.c 00:07:08.910 Processing file lib/rpc/rpc.c 00:07:09.167 Processing file lib/scsi/scsi_rpc.c 00:07:09.167 Processing file lib/scsi/task.c 00:07:09.167 Processing file lib/scsi/scsi_pr.c 00:07:09.167 Processing file lib/scsi/scsi.c 00:07:09.167 Processing file lib/scsi/dev.c 00:07:09.167 Processing file lib/scsi/lun.c 00:07:09.167 Processing file lib/scsi/scsi_bdev.c 00:07:09.167 Processing file lib/scsi/port.c 00:07:09.167 Processing file lib/sock/sock_rpc.c 00:07:09.167 Processing file lib/sock/sock.c 00:07:09.167 Processing file lib/thread/iobuf.c 00:07:09.167 Processing file 
lib/thread/thread.c 00:07:09.465 Processing file lib/trace/trace_rpc.c 00:07:09.465 Processing file lib/trace/trace_flags.c 00:07:09.465 Processing file lib/trace/trace.c 00:07:09.465 Processing file lib/trace_parser/trace.cpp 00:07:09.465 Processing file lib/ut/ut.c 00:07:09.465 Processing file lib/ut_mock/mock.c 00:07:10.040 Processing file lib/util/math.c 00:07:10.040 Processing file lib/util/zipf.c 00:07:10.040 Processing file lib/util/crc32.c 00:07:10.040 Processing file lib/util/crc16.c 00:07:10.040 Processing file lib/util/pipe.c 00:07:10.040 Processing file lib/util/dif.c 00:07:10.040 Processing file lib/util/cpuset.c 00:07:10.040 Processing file lib/util/crc32c.c 00:07:10.040 Processing file lib/util/fd_group.c 00:07:10.040 Processing file lib/util/xor.c 00:07:10.040 Processing file lib/util/base64.c 00:07:10.040 Processing file lib/util/fd.c 00:07:10.040 Processing file lib/util/file.c 00:07:10.040 Processing file lib/util/crc64.c 00:07:10.040 Processing file lib/util/crc32_ieee.c 00:07:10.040 Processing file lib/util/string.c 00:07:10.040 Processing file lib/util/iov.c 00:07:10.040 Processing file lib/util/bit_array.c 00:07:10.040 Processing file lib/util/strerror_tls.c 00:07:10.040 Processing file lib/util/uuid.c 00:07:10.040 Processing file lib/util/hexlify.c 00:07:10.040 Processing file lib/vfio_user/host/vfio_user.c 00:07:10.040 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:10.299 Processing file lib/vhost/vhost_internal.h 00:07:10.299 Processing file lib/vhost/rte_vhost_user.c 00:07:10.299 Processing file lib/vhost/vhost_scsi.c 00:07:10.299 Processing file lib/vhost/vhost_rpc.c 00:07:10.299 Processing file lib/vhost/vhost.c 00:07:10.299 Processing file lib/vhost/vhost_blk.c 00:07:10.299 Processing file lib/virtio/virtio.c 00:07:10.299 Processing file lib/virtio/virtio_pci.c 00:07:10.299 Processing file lib/virtio/virtio_vfio_user.c 00:07:10.299 Processing file lib/virtio/virtio_vhost_user.c 00:07:10.299 Processing file lib/vmd/vmd.c 00:07:10.299 Processing file lib/vmd/led.c 00:07:10.558 Processing file module/accel/dsa/accel_dsa.c 00:07:10.558 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:10.558 Processing file module/accel/error/accel_error_rpc.c 00:07:10.558 Processing file module/accel/error/accel_error.c 00:07:10.558 Processing file module/accel/iaa/accel_iaa.c 00:07:10.558 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:10.816 Processing file module/accel/ioat/accel_ioat.c 00:07:10.816 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:10.816 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:10.816 Processing file module/bdev/aio/bdev_aio.c 00:07:10.816 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:10.816 Processing file module/bdev/delay/vbdev_delay.c 00:07:10.816 Processing file module/bdev/error/vbdev_error.c 00:07:10.816 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:11.074 Processing file module/bdev/ftl/bdev_ftl.c 00:07:11.074 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:11.074 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:11.074 Processing file module/bdev/gpt/gpt.c 00:07:11.074 Processing file module/bdev/gpt/gpt.h 00:07:11.074 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:11.074 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:11.332 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:11.332 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:11.332 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:11.332 Processing file module/bdev/malloc/bdev_malloc.c 
00:07:11.332 Processing file module/bdev/null/bdev_null.c 00:07:11.332 Processing file module/bdev/null/bdev_null_rpc.c 00:07:11.898 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:07:11.898 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:11.898 Processing file module/bdev/nvme/vbdev_opal.c 00:07:11.898 Processing file module/bdev/nvme/nvme_rpc.c 00:07:11.898 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:11.898 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:11.898 Processing file module/bdev/nvme/bdev_nvme.c 00:07:11.898 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:11.898 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:12.155 Processing file module/bdev/raid/raid1.c 00:07:12.155 Processing file module/bdev/raid/raid0.c 00:07:12.155 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:12.155 Processing file module/bdev/raid/concat.c 00:07:12.155 Processing file module/bdev/raid/bdev_raid.h 00:07:12.155 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:12.155 Processing file module/bdev/raid/bdev_raid.c 00:07:12.155 Processing file module/bdev/split/vbdev_split.c 00:07:12.155 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:12.155 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:12.155 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:12.155 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:12.413 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:12.413 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:12.413 Processing file module/blob/bdev/blob_bdev.c 00:07:12.413 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:12.413 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:12.413 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:12.671 Processing file module/event/subsystems/accel/accel.c 00:07:12.671 Processing file module/event/subsystems/bdev/bdev.c 00:07:12.671 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:12.671 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:12.671 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:12.671 Processing file module/event/subsystems/keyring/keyring.c 00:07:13.005 Processing file module/event/subsystems/nbd/nbd.c 00:07:13.005 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:13.005 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:13.005 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:13.005 Processing file module/event/subsystems/scsi/scsi.c 00:07:13.005 Processing file module/event/subsystems/sock/sock.c 00:07:13.005 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:13.264 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:13.264 Processing file module/event/subsystems/vmd/vmd.c 00:07:13.264 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:13.264 Processing file module/keyring/file/keyring.c 00:07:13.264 Processing file module/keyring/file/keyring_rpc.c 00:07:13.264 Processing file module/keyring/linux/keyring.c 00:07:13.264 Processing file module/keyring/linux/keyring_rpc.c 00:07:13.264 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:13.543 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:13.543 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:13.543 Processing file module/sock/sock_kernel.h 00:07:13.543 Processing file module/sock/posix/posix.c 00:07:13.543 Writing directory view page. 
00:07:13.543 Overall coverage rate: 00:07:13.543 lines......: 38.4% (40436 of 105227 lines) 00:07:13.543 functions..: 42.1% (3688 of 8760 functions) 00:07:13.543 13:59:59 unittest -- unit/unittest.sh@305 -- # set +x 00:07:13.543 00:07:13.543 00:07:13.543 ===================== 00:07:13.543 All unit tests passed 00:07:13.543 ===================== 00:07:13.543 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:13.543 00:07:13.543 00:07:13.802 00:07:13.802 real 2m24.189s 00:07:13.802 user 2m0.884s 00:07:13.802 sys 0m13.120s 00:07:13.802 13:59:59 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.802 13:59:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:07:13.802 ************************************ 00:07:13.802 END TEST unittest 00:07:13.802 ************************************ 00:07:13.802 13:59:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.802 13:59:59 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:07:13.802 13:59:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:13.802 13:59:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:07:13.802 13:59:59 -- spdk/autotest.sh@162 -- # timing_enter lib 00:07:13.802 13:59:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:13.802 13:59:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.802 13:59:59 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:07:13.802 13:59:59 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:13.802 13:59:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.802 13:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.802 13:59:59 -- common/autotest_common.sh@10 -- # set +x 00:07:13.802 ************************************ 00:07:13.802 START TEST env 00:07:13.802 ************************************ 00:07:13.802 13:59:59 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:13.802 * Looking for test storage... 
00:07:13.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:13.802 13:59:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:13.802 13:59:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:13.802 13:59:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.802 13:59:59 env -- common/autotest_common.sh@10 -- # set +x 00:07:13.802 ************************************ 00:07:13.802 START TEST env_memory 00:07:13.802 ************************************ 00:07:13.802 13:59:59 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:13.802 00:07:13.802 00:07:13.802 CUnit - A unit testing framework for C - Version 2.1-3 00:07:13.802 http://cunit.sourceforge.net/ 00:07:13.802 00:07:13.802 00:07:13.802 Suite: memory 00:07:13.802 Test: alloc and free memory map ...[2024-07-15 13:59:59.749670] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:13.802 passed 00:07:13.802 Test: mem map translation ...[2024-07-15 13:59:59.784353] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:13.802 [2024-07-15 13:59:59.784702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:13.802 [2024-07-15 13:59:59.784931] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:13.802 [2024-07-15 13:59:59.785174] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:14.059 passed 00:07:14.059 Test: mem map registration ...[2024-07-15 13:59:59.830478] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:14.059 [2024-07-15 13:59:59.830953] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:14.059 passed 00:07:14.059 Test: mem map adjacent registrations ...passed 00:07:14.059 00:07:14.059 Run Summary: Type Total Ran Passed Failed Inactive 00:07:14.059 suites 1 1 n/a 0 0 00:07:14.059 tests 4 4 4 0 0 00:07:14.059 asserts 152 152 152 0 n/a 00:07:14.059 00:07:14.059 Elapsed time = 0.194 seconds 00:07:14.059 00:07:14.059 real 0m0.222s 00:07:14.059 user 0m0.196s 00:07:14.059 sys 0m0.021s 00:07:14.059 13:59:59 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.059 13:59:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:14.059 ************************************ 00:07:14.059 END TEST env_memory 00:07:14.059 ************************************ 00:07:14.059 13:59:59 env -- common/autotest_common.sh@1142 -- # return 0 00:07:14.059 13:59:59 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:14.059 13:59:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.059 13:59:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.059 13:59:59 env -- common/autotest_common.sh@10 -- # set +x 00:07:14.059 ************************************ 00:07:14.059 START TEST env_vtophys 
00:07:14.059 ************************************ 00:07:14.059 13:59:59 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:14.059 EAL: lib.eal log level changed from notice to debug 00:07:14.059 EAL: Detected lcore 0 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 1 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 2 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 3 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 4 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 5 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 6 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 7 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 8 as core 0 on socket 0 00:07:14.059 EAL: Detected lcore 9 as core 0 on socket 0 00:07:14.059 EAL: Maximum logical cores by configuration: 128 00:07:14.059 EAL: Detected CPU lcores: 10 00:07:14.059 EAL: Detected NUMA nodes: 1 00:07:14.059 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:14.059 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:14.059 EAL: Checking presence of .so 'librte_eal.so' 00:07:14.059 EAL: Detected static linkage of DPDK 00:07:14.317 EAL: No shared files mode enabled, IPC will be disabled 00:07:14.317 EAL: Selected IOVA mode 'PA' 00:07:14.317 EAL: Probing VFIO support... 00:07:14.317 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:14.317 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:14.317 EAL: Ask a virtual area of 0x2e000 bytes 00:07:14.317 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:14.317 EAL: Setting up physically contiguous memory... 00:07:14.317 EAL: Setting maximum number of open files to 524288 00:07:14.317 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:14.317 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:14.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.317 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:14.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.317 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:14.317 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:14.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.317 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:14.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.317 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:14.317 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:14.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.317 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:14.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.317 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:14.317 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:14.317 EAL: Ask a virtual area of 0x61000 bytes 00:07:14.317 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:14.317 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:14.317 EAL: Ask a virtual area of 0x400000000 bytes 00:07:14.317 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:14.317 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:07:14.317 EAL: Hugepages will be freed exactly as allocated. 00:07:14.317 EAL: No shared files mode enabled, IPC is disabled 00:07:14.317 EAL: No shared files mode enabled, IPC is disabled 00:07:14.317 EAL: TSC frequency is ~2200000 KHz 00:07:14.317 EAL: Main lcore 0 is ready (tid=7fc46d5f3a40;cpuset=[0]) 00:07:14.317 EAL: Trying to obtain current memory policy. 00:07:14.317 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.317 EAL: Restoring previous memory policy: 0 00:07:14.317 EAL: request: mp_malloc_sync 00:07:14.317 EAL: No shared files mode enabled, IPC is disabled 00:07:14.317 EAL: Heap on socket 0 was expanded by 2MB 00:07:14.317 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:14.317 EAL: Mem event callback 'spdk:(nil)' registered 00:07:14.317 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:14.317 00:07:14.317 00:07:14.317 CUnit - A unit testing framework for C - Version 2.1-3 00:07:14.317 http://cunit.sourceforge.net/ 00:07:14.317 00:07:14.317 00:07:14.317 Suite: components_suite 00:07:14.882 Test: vtophys_malloc_test ...passed 00:07:14.882 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:14.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.882 EAL: Restoring previous memory policy: 4 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was expanded by 4MB 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was shrunk by 4MB 00:07:14.882 EAL: Trying to obtain current memory policy. 00:07:14.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.882 EAL: Restoring previous memory policy: 4 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was expanded by 6MB 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was shrunk by 6MB 00:07:14.882 EAL: Trying to obtain current memory policy. 00:07:14.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.882 EAL: Restoring previous memory policy: 4 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was expanded by 10MB 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was shrunk by 10MB 00:07:14.882 EAL: Trying to obtain current memory policy. 
00:07:14.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.882 EAL: Restoring previous memory policy: 4 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was expanded by 18MB 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was shrunk by 18MB 00:07:14.882 EAL: Trying to obtain current memory policy. 00:07:14.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:14.882 EAL: Restoring previous memory policy: 4 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was expanded by 34MB 00:07:14.882 EAL: Calling mem event callback 'spdk:(nil)' 00:07:14.882 EAL: request: mp_malloc_sync 00:07:14.882 EAL: No shared files mode enabled, IPC is disabled 00:07:14.882 EAL: Heap on socket 0 was shrunk by 34MB 00:07:14.882 EAL: Trying to obtain current memory policy. 00:07:14.882 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.139 EAL: Restoring previous memory policy: 4 00:07:15.139 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.139 EAL: request: mp_malloc_sync 00:07:15.139 EAL: No shared files mode enabled, IPC is disabled 00:07:15.139 EAL: Heap on socket 0 was expanded by 66MB 00:07:15.139 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.139 EAL: request: mp_malloc_sync 00:07:15.139 EAL: No shared files mode enabled, IPC is disabled 00:07:15.139 EAL: Heap on socket 0 was shrunk by 66MB 00:07:15.139 EAL: Trying to obtain current memory policy. 00:07:15.139 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.139 EAL: Restoring previous memory policy: 4 00:07:15.139 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.139 EAL: request: mp_malloc_sync 00:07:15.139 EAL: No shared files mode enabled, IPC is disabled 00:07:15.139 EAL: Heap on socket 0 was expanded by 130MB 00:07:15.395 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.395 EAL: request: mp_malloc_sync 00:07:15.395 EAL: No shared files mode enabled, IPC is disabled 00:07:15.395 EAL: Heap on socket 0 was shrunk by 130MB 00:07:15.653 EAL: Trying to obtain current memory policy. 00:07:15.653 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:15.653 EAL: Restoring previous memory policy: 4 00:07:15.653 EAL: Calling mem event callback 'spdk:(nil)' 00:07:15.653 EAL: request: mp_malloc_sync 00:07:15.653 EAL: No shared files mode enabled, IPC is disabled 00:07:15.653 EAL: Heap on socket 0 was expanded by 258MB 00:07:16.216 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.216 EAL: request: mp_malloc_sync 00:07:16.216 EAL: No shared files mode enabled, IPC is disabled 00:07:16.216 EAL: Heap on socket 0 was shrunk by 258MB 00:07:16.474 EAL: Trying to obtain current memory policy. 
00:07:16.474 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:16.731 EAL: Restoring previous memory policy: 4 00:07:16.731 EAL: Calling mem event callback 'spdk:(nil)' 00:07:16.731 EAL: request: mp_malloc_sync 00:07:16.731 EAL: No shared files mode enabled, IPC is disabled 00:07:16.731 EAL: Heap on socket 0 was expanded by 514MB 00:07:17.665 EAL: Calling mem event callback 'spdk:(nil)' 00:07:17.665 EAL: request: mp_malloc_sync 00:07:17.665 EAL: No shared files mode enabled, IPC is disabled 00:07:17.665 EAL: Heap on socket 0 was shrunk by 514MB 00:07:18.231 EAL: Trying to obtain current memory policy. 00:07:18.231 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:18.489 EAL: Restoring previous memory policy: 4 00:07:18.489 EAL: Calling mem event callback 'spdk:(nil)' 00:07:18.489 EAL: request: mp_malloc_sync 00:07:18.489 EAL: No shared files mode enabled, IPC is disabled 00:07:18.489 EAL: Heap on socket 0 was expanded by 1026MB 00:07:20.389 EAL: Calling mem event callback 'spdk:(nil)' 00:07:20.389 EAL: request: mp_malloc_sync 00:07:20.389 EAL: No shared files mode enabled, IPC is disabled 00:07:20.389 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:21.760 passed 00:07:21.760 00:07:21.760 Run Summary: Type Total Ran Passed Failed Inactive 00:07:21.760 suites 1 1 n/a 0 0 00:07:21.760 tests 2 2 2 0 0 00:07:21.760 asserts 6496 6496 6496 0 n/a 00:07:21.760 00:07:21.760 Elapsed time = 7.452 seconds 00:07:21.760 EAL: Calling mem event callback 'spdk:(nil)' 00:07:21.760 EAL: request: mp_malloc_sync 00:07:21.760 EAL: No shared files mode enabled, IPC is disabled 00:07:21.760 EAL: Heap on socket 0 was shrunk by 2MB 00:07:21.760 EAL: No shared files mode enabled, IPC is disabled 00:07:21.760 EAL: No shared files mode enabled, IPC is disabled 00:07:21.760 EAL: No shared files mode enabled, IPC is disabled 00:07:22.017 00:07:22.017 real 0m7.804s 00:07:22.017 user 0m6.655s 00:07:22.017 sys 0m0.955s 00:07:22.017 14:00:07 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.017 14:00:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:22.017 ************************************ 00:07:22.017 END TEST env_vtophys 00:07:22.017 ************************************ 00:07:22.017 14:00:07 env -- common/autotest_common.sh@1142 -- # return 0 00:07:22.017 14:00:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:22.017 14:00:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.017 14:00:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.017 14:00:07 env -- common/autotest_common.sh@10 -- # set +x 00:07:22.017 ************************************ 00:07:22.017 START TEST env_pci 00:07:22.017 ************************************ 00:07:22.017 14:00:07 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:22.017 00:07:22.017 00:07:22.017 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.017 http://cunit.sourceforge.net/ 00:07:22.017 00:07:22.017 00:07:22.017 Suite: pci 00:07:22.017 Test: pci_hook ...[2024-07-15 14:00:07.876834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 177534 has claimed it 00:07:22.017 EAL: Cannot find device (10000:00:01.0) 00:07:22.017 EAL: Failed to attach device on primary process 00:07:22.017 passed 00:07:22.017 00:07:22.017 Run Summary: Type Total Ran Passed Failed 
Inactive 00:07:22.017 suites 1 1 n/a 0 0 00:07:22.017 tests 1 1 1 0 0 00:07:22.017 asserts 25 25 25 0 n/a 00:07:22.017 00:07:22.017 Elapsed time = 0.005 seconds 00:07:22.017 00:07:22.017 real 0m0.067s 00:07:22.017 user 0m0.031s 00:07:22.017 sys 0m0.033s 00:07:22.017 14:00:07 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.017 14:00:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:22.017 ************************************ 00:07:22.017 END TEST env_pci 00:07:22.017 ************************************ 00:07:22.017 14:00:07 env -- common/autotest_common.sh@1142 -- # return 0 00:07:22.017 14:00:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:22.017 14:00:07 env -- env/env.sh@15 -- # uname 00:07:22.017 14:00:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:22.017 14:00:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:22.017 14:00:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:22.017 14:00:07 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:22.017 14:00:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.017 14:00:07 env -- common/autotest_common.sh@10 -- # set +x 00:07:22.017 ************************************ 00:07:22.017 START TEST env_dpdk_post_init 00:07:22.017 ************************************ 00:07:22.017 14:00:07 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:22.017 EAL: Detected CPU lcores: 10 00:07:22.017 EAL: Detected NUMA nodes: 1 00:07:22.017 EAL: Detected static linkage of DPDK 00:07:22.274 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:22.274 EAL: Selected IOVA mode 'PA' 00:07:22.274 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:22.274 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket 0) 00:07:22.274 Starting DPDK initialization... 00:07:22.274 Starting SPDK post initialization... 00:07:22.274 SPDK NVMe probe 00:07:22.274 Attaching to 0000:00:10.0 00:07:22.274 Attached to 0000:00:10.0 00:07:22.274 Cleaning up... 
00:07:22.274 00:07:22.274 real 0m0.264s 00:07:22.274 user 0m0.082s 00:07:22.274 sys 0m0.081s 00:07:22.274 14:00:08 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.274 14:00:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:22.274 ************************************ 00:07:22.274 END TEST env_dpdk_post_init 00:07:22.274 ************************************ 00:07:22.274 14:00:08 env -- common/autotest_common.sh@1142 -- # return 0 00:07:22.274 14:00:08 env -- env/env.sh@26 -- # uname 00:07:22.274 14:00:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:22.274 14:00:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:22.274 14:00:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.274 14:00:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.274 14:00:08 env -- common/autotest_common.sh@10 -- # set +x 00:07:22.274 ************************************ 00:07:22.274 START TEST env_mem_callbacks 00:07:22.274 ************************************ 00:07:22.274 14:00:08 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:22.532 EAL: Detected CPU lcores: 10 00:07:22.532 EAL: Detected NUMA nodes: 1 00:07:22.532 EAL: Detected static linkage of DPDK 00:07:22.532 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:22.532 EAL: Selected IOVA mode 'PA' 00:07:22.532 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:22.532 00:07:22.532 00:07:22.532 CUnit - A unit testing framework for C - Version 2.1-3 00:07:22.532 http://cunit.sourceforge.net/ 00:07:22.532 00:07:22.532 00:07:22.532 Suite: memory 00:07:22.532 Test: test ... 
00:07:22.532 register 0x200000200000 2097152 00:07:22.532 malloc 3145728 00:07:22.532 register 0x200000400000 4194304 00:07:22.532 buf 0x2000004fffc0 len 3145728 PASSED 00:07:22.532 malloc 64 00:07:22.532 buf 0x2000004ffec0 len 64 PASSED 00:07:22.532 malloc 4194304 00:07:22.532 register 0x200000800000 6291456 00:07:22.532 buf 0x2000009fffc0 len 4194304 PASSED 00:07:22.532 free 0x2000004fffc0 3145728 00:07:22.532 free 0x2000004ffec0 64 00:07:22.532 unregister 0x200000400000 4194304 PASSED 00:07:22.532 free 0x2000009fffc0 4194304 00:07:22.532 unregister 0x200000800000 6291456 PASSED 00:07:22.790 malloc 8388608 00:07:22.790 register 0x200000400000 10485760 00:07:22.790 buf 0x2000005fffc0 len 8388608 PASSED 00:07:22.790 free 0x2000005fffc0 8388608 00:07:22.790 unregister 0x200000400000 10485760 PASSED 00:07:22.790 passed 00:07:22.790 00:07:22.790 Run Summary: Type Total Ran Passed Failed Inactive 00:07:22.790 suites 1 1 n/a 0 0 00:07:22.790 tests 1 1 1 0 0 00:07:22.790 asserts 15 15 15 0 n/a 00:07:22.790 00:07:22.790 Elapsed time = 0.111 seconds 00:07:22.790 00:07:22.790 real 0m0.360s 00:07:22.790 user 0m0.155s 00:07:22.790 sys 0m0.096s 00:07:22.790 14:00:08 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.790 14:00:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:22.790 ************************************ 00:07:22.790 END TEST env_mem_callbacks 00:07:22.790 ************************************ 00:07:22.790 14:00:08 env -- common/autotest_common.sh@1142 -- # return 0 00:07:22.790 00:07:22.790 real 0m9.057s 00:07:22.790 user 0m7.244s 00:07:22.790 sys 0m1.387s 00:07:22.790 14:00:08 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.790 14:00:08 env -- common/autotest_common.sh@10 -- # set +x 00:07:22.790 ************************************ 00:07:22.790 END TEST env 00:07:22.790 ************************************ 00:07:22.790 14:00:08 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.790 14:00:08 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:22.790 14:00:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:22.790 14:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.790 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:07:22.790 ************************************ 00:07:22.790 START TEST rpc 00:07:22.790 ************************************ 00:07:22.790 14:00:08 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:22.790 * Looking for test storage... 00:07:23.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:23.047 14:00:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=177659 00:07:23.047 14:00:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:23.047 14:00:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:23.047 14:00:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 177659 00:07:23.047 14:00:08 rpc -- common/autotest_common.sh@829 -- # '[' -z 177659 ']' 00:07:23.047 14:00:08 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.047 14:00:08 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.047 14:00:08 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
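(For orientation before the rpc_integrity output below: stripped of the xtrace noise, the flow that rpc.sh drives through rpc_cmd looks roughly like the following. This is a sketch reconstructed from the commands visible in this log, not a verbatim copy of the script.)

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &   # start the target with the bdev trace group
    waitforlisten 177659                                        # wait until /var/tmp/spdk.sock is listening
    rpc_cmd bdev_get_bdevs | jq length                          # expect 0 bdevs before the test starts
    rpc_cmd bdev_malloc_create 8 512                            # create Malloc0 (16384 blocks of 512 bytes)
    rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0        # layer Passthru0 on top of Malloc0
    rpc_cmd bdev_get_bdevs | jq length                          # both bdevs now appear in the JSON dump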
00:07:23.047 14:00:08 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.047 14:00:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:23.047 [2024-07-15 14:00:08.854772] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:07:23.047 [2024-07-15 14:00:08.855382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177659 ] 00:07:23.047 [2024-07-15 14:00:09.028341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.368 [2024-07-15 14:00:09.275816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:23.368 [2024-07-15 14:00:09.276121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 177659' to capture a snapshot of events at runtime. 00:07:23.368 [2024-07-15 14:00:09.276313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.368 [2024-07-15 14:00:09.276476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.368 [2024-07-15 14:00:09.276640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid177659 for offline analysis/debug. 00:07:23.368 [2024-07-15 14:00:09.276889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.302 14:00:10 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:24.302 14:00:10 rpc -- common/autotest_common.sh@862 -- # return 0 00:07:24.302 14:00:10 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:24.302 14:00:10 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:24.302 14:00:10 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:24.302 14:00:10 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:24.302 14:00:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.302 14:00:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.302 14:00:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.302 ************************************ 00:07:24.302 START TEST rpc_integrity 00:07:24.302 ************************************ 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.302 
14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:24.302 { 00:07:24.302 "name": "Malloc0", 00:07:24.302 "aliases": [ 00:07:24.302 "483c01cf-85cc-496d-8ddb-a91623ff8d0c" 00:07:24.302 ], 00:07:24.302 "product_name": "Malloc disk", 00:07:24.302 "block_size": 512, 00:07:24.302 "num_blocks": 16384, 00:07:24.302 "uuid": "483c01cf-85cc-496d-8ddb-a91623ff8d0c", 00:07:24.302 "assigned_rate_limits": { 00:07:24.302 "rw_ios_per_sec": 0, 00:07:24.302 "rw_mbytes_per_sec": 0, 00:07:24.302 "r_mbytes_per_sec": 0, 00:07:24.302 "w_mbytes_per_sec": 0 00:07:24.302 }, 00:07:24.302 "claimed": false, 00:07:24.302 "zoned": false, 00:07:24.302 "supported_io_types": { 00:07:24.302 "read": true, 00:07:24.302 "write": true, 00:07:24.302 "unmap": true, 00:07:24.302 "flush": true, 00:07:24.302 "reset": true, 00:07:24.302 "nvme_admin": false, 00:07:24.302 "nvme_io": false, 00:07:24.302 "nvme_io_md": false, 00:07:24.302 "write_zeroes": true, 00:07:24.302 "zcopy": true, 00:07:24.302 "get_zone_info": false, 00:07:24.302 "zone_management": false, 00:07:24.302 "zone_append": false, 00:07:24.302 "compare": false, 00:07:24.302 "compare_and_write": false, 00:07:24.302 "abort": true, 00:07:24.302 "seek_hole": false, 00:07:24.302 "seek_data": false, 00:07:24.302 "copy": true, 00:07:24.302 "nvme_iov_md": false 00:07:24.302 }, 00:07:24.302 "memory_domains": [ 00:07:24.302 { 00:07:24.302 "dma_device_id": "system", 00:07:24.302 "dma_device_type": 1 00:07:24.302 }, 00:07:24.302 { 00:07:24.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.302 "dma_device_type": 2 00:07:24.302 } 00:07:24.302 ], 00:07:24.302 "driver_specific": {} 00:07:24.302 } 00:07:24.302 ]' 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.302 [2024-07-15 14:00:10.226172] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:24.302 [2024-07-15 14:00:10.226396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.302 [2024-07-15 14:00:10.226599] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:24.302 [2024-07-15 14:00:10.226745] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.302 [2024-07-15 14:00:10.228574] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.302 [2024-07-15 14:00:10.228787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:24.302 Passthru0 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
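The rpc_integrity steps above amount to a create/inspect/delete round trip over JSON-RPC: a malloc bdev is created, a passthru bdev is stacked on top of it (claiming the base bdev), and bdev_get_bdevs must report both before teardown. Reproduced by hand with scripts/rpc.py against an already running target (default socket assumed), the same sequence is roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 8 MiB malloc bdev with 512-byte blocks, as in the test; prints the bdev name.
malloc=$("$rpc" bdev_malloc_create 8 512)

# Stack a passthru bdev on top; the base bdev becomes claimed (exclusive_write).
"$rpc" bdev_passthru_create -b "$malloc" -p Passthru0

# Both bdevs are now reported; the test checks that jq length equals 2 here.
"$rpc" bdev_get_bdevs | jq length

# Tear down in reverse order so the final bdev list is empty again.
"$rpc" bdev_passthru_delete Passthru0
"$rpc" bdev_malloc_delete "$malloc"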
00:07:24.302 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:24.302 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.303 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.303 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.303 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:24.303 { 00:07:24.303 "name": "Malloc0", 00:07:24.303 "aliases": [ 00:07:24.303 "483c01cf-85cc-496d-8ddb-a91623ff8d0c" 00:07:24.303 ], 00:07:24.303 "product_name": "Malloc disk", 00:07:24.303 "block_size": 512, 00:07:24.303 "num_blocks": 16384, 00:07:24.303 "uuid": "483c01cf-85cc-496d-8ddb-a91623ff8d0c", 00:07:24.303 "assigned_rate_limits": { 00:07:24.303 "rw_ios_per_sec": 0, 00:07:24.303 "rw_mbytes_per_sec": 0, 00:07:24.303 "r_mbytes_per_sec": 0, 00:07:24.303 "w_mbytes_per_sec": 0 00:07:24.303 }, 00:07:24.303 "claimed": true, 00:07:24.303 "claim_type": "exclusive_write", 00:07:24.303 "zoned": false, 00:07:24.303 "supported_io_types": { 00:07:24.303 "read": true, 00:07:24.303 "write": true, 00:07:24.303 "unmap": true, 00:07:24.303 "flush": true, 00:07:24.303 "reset": true, 00:07:24.303 "nvme_admin": false, 00:07:24.303 "nvme_io": false, 00:07:24.303 "nvme_io_md": false, 00:07:24.303 "write_zeroes": true, 00:07:24.303 "zcopy": true, 00:07:24.303 "get_zone_info": false, 00:07:24.303 "zone_management": false, 00:07:24.303 "zone_append": false, 00:07:24.303 "compare": false, 00:07:24.303 "compare_and_write": false, 00:07:24.303 "abort": true, 00:07:24.303 "seek_hole": false, 00:07:24.303 "seek_data": false, 00:07:24.303 "copy": true, 00:07:24.303 "nvme_iov_md": false 00:07:24.303 }, 00:07:24.303 "memory_domains": [ 00:07:24.303 { 00:07:24.303 "dma_device_id": "system", 00:07:24.303 "dma_device_type": 1 00:07:24.303 }, 00:07:24.303 { 00:07:24.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.303 "dma_device_type": 2 00:07:24.303 } 00:07:24.303 ], 00:07:24.303 "driver_specific": {} 00:07:24.303 }, 00:07:24.303 { 00:07:24.303 "name": "Passthru0", 00:07:24.303 "aliases": [ 00:07:24.303 "f1771d26-94f6-563c-bbfb-8bfafc57ba3c" 00:07:24.303 ], 00:07:24.303 "product_name": "passthru", 00:07:24.303 "block_size": 512, 00:07:24.303 "num_blocks": 16384, 00:07:24.303 "uuid": "f1771d26-94f6-563c-bbfb-8bfafc57ba3c", 00:07:24.303 "assigned_rate_limits": { 00:07:24.303 "rw_ios_per_sec": 0, 00:07:24.303 "rw_mbytes_per_sec": 0, 00:07:24.303 "r_mbytes_per_sec": 0, 00:07:24.304 "w_mbytes_per_sec": 0 00:07:24.304 }, 00:07:24.304 "claimed": false, 00:07:24.304 "zoned": false, 00:07:24.304 "supported_io_types": { 00:07:24.304 "read": true, 00:07:24.304 "write": true, 00:07:24.304 "unmap": true, 00:07:24.304 "flush": true, 00:07:24.304 "reset": true, 00:07:24.304 "nvme_admin": false, 00:07:24.304 "nvme_io": false, 00:07:24.304 "nvme_io_md": false, 00:07:24.304 "write_zeroes": true, 00:07:24.304 "zcopy": true, 00:07:24.304 "get_zone_info": false, 00:07:24.304 "zone_management": false, 00:07:24.304 "zone_append": false, 00:07:24.304 "compare": false, 00:07:24.304 "compare_and_write": false, 00:07:24.304 "abort": true, 00:07:24.304 "seek_hole": false, 00:07:24.304 "seek_data": false, 00:07:24.304 "copy": true, 00:07:24.304 "nvme_iov_md": false 00:07:24.304 }, 00:07:24.304 "memory_domains": [ 00:07:24.304 { 00:07:24.304 "dma_device_id": "system", 00:07:24.304 "dma_device_type": 1 00:07:24.304 }, 00:07:24.304 { 00:07:24.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.304 "dma_device_type": 
2 00:07:24.304 } 00:07:24.304 ], 00:07:24.304 "driver_specific": { 00:07:24.304 "passthru": { 00:07:24.304 "name": "Passthru0", 00:07:24.304 "base_bdev_name": "Malloc0" 00:07:24.304 } 00:07:24.304 } 00:07:24.304 } 00:07:24.304 ]' 00:07:24.304 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:24.563 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:24.563 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.563 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.563 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.563 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:24.563 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:24.563 14:00:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:24.563 00:07:24.563 real 0m0.346s 00:07:24.563 user 0m0.216s 00:07:24.563 sys 0m0.029s 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.563 14:00:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 ************************************ 00:07:24.563 END TEST rpc_integrity 00:07:24.563 ************************************ 00:07:24.563 14:00:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:24.563 14:00:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:24.563 14:00:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.563 14:00:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.563 14:00:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 ************************************ 00:07:24.563 START TEST rpc_plugins 00:07:24.563 ************************************ 00:07:24.563 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:07:24.563 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:24.563 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.563 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.563 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:24.563 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:24.563 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.563 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:24.563 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.563 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
bdevs='[ 00:07:24.563 { 00:07:24.563 "name": "Malloc1", 00:07:24.563 "aliases": [ 00:07:24.563 "235b25fa-c88e-49a8-973e-6814b373bb72" 00:07:24.563 ], 00:07:24.563 "product_name": "Malloc disk", 00:07:24.563 "block_size": 4096, 00:07:24.563 "num_blocks": 256, 00:07:24.563 "uuid": "235b25fa-c88e-49a8-973e-6814b373bb72", 00:07:24.563 "assigned_rate_limits": { 00:07:24.563 "rw_ios_per_sec": 0, 00:07:24.563 "rw_mbytes_per_sec": 0, 00:07:24.563 "r_mbytes_per_sec": 0, 00:07:24.563 "w_mbytes_per_sec": 0 00:07:24.563 }, 00:07:24.563 "claimed": false, 00:07:24.563 "zoned": false, 00:07:24.563 "supported_io_types": { 00:07:24.563 "read": true, 00:07:24.563 "write": true, 00:07:24.563 "unmap": true, 00:07:24.563 "flush": true, 00:07:24.563 "reset": true, 00:07:24.563 "nvme_admin": false, 00:07:24.563 "nvme_io": false, 00:07:24.563 "nvme_io_md": false, 00:07:24.563 "write_zeroes": true, 00:07:24.563 "zcopy": true, 00:07:24.563 "get_zone_info": false, 00:07:24.563 "zone_management": false, 00:07:24.563 "zone_append": false, 00:07:24.563 "compare": false, 00:07:24.563 "compare_and_write": false, 00:07:24.563 "abort": true, 00:07:24.563 "seek_hole": false, 00:07:24.563 "seek_data": false, 00:07:24.563 "copy": true, 00:07:24.563 "nvme_iov_md": false 00:07:24.563 }, 00:07:24.563 "memory_domains": [ 00:07:24.563 { 00:07:24.563 "dma_device_id": "system", 00:07:24.563 "dma_device_type": 1 00:07:24.563 }, 00:07:24.563 { 00:07:24.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:24.563 "dma_device_type": 2 00:07:24.564 } 00:07:24.564 ], 00:07:24.564 "driver_specific": {} 00:07:24.564 } 00:07:24.564 ]' 00:07:24.564 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:24.564 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:24.564 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:24.564 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.564 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:24.564 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.564 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:24.564 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.564 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.821 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:24.821 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:24.821 14:00:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:24.821 00:07:24.821 real 0m0.150s 00:07:24.821 user 0m0.093s 00:07:24.821 sys 0m0.012s 00:07:24.821 ************************************ 00:07:24.821 END TEST rpc_plugins 00:07:24.821 ************************************ 00:07:24.821 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.821 14:00:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 14:00:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:24.821 14:00:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:24.821 14:00:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:24.821 14:00:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.821 14:00:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 ************************************ 00:07:24.821 
START TEST rpc_trace_cmd_test 00:07:24.821 ************************************ 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:24.821 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid177659", 00:07:24.821 "tpoint_group_mask": "0x8", 00:07:24.821 "iscsi_conn": { 00:07:24.821 "mask": "0x2", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "scsi": { 00:07:24.821 "mask": "0x4", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "bdev": { 00:07:24.821 "mask": "0x8", 00:07:24.821 "tpoint_mask": "0xffffffffffffffff" 00:07:24.821 }, 00:07:24.821 "nvmf_rdma": { 00:07:24.821 "mask": "0x10", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "nvmf_tcp": { 00:07:24.821 "mask": "0x20", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "ftl": { 00:07:24.821 "mask": "0x40", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "blobfs": { 00:07:24.821 "mask": "0x80", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "dsa": { 00:07:24.821 "mask": "0x200", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "thread": { 00:07:24.821 "mask": "0x400", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "nvme_pcie": { 00:07:24.821 "mask": "0x800", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "iaa": { 00:07:24.821 "mask": "0x1000", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "nvme_tcp": { 00:07:24.821 "mask": "0x2000", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "bdev_nvme": { 00:07:24.821 "mask": "0x4000", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 }, 00:07:24.821 "sock": { 00:07:24.821 "mask": "0x8000", 00:07:24.821 "tpoint_mask": "0x0" 00:07:24.821 } 00:07:24.821 }' 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:24.821 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:25.079 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:25.079 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:25.079 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:25.079 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:25.079 14:00:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:25.079 00:07:25.079 real 0m0.260s 00:07:25.079 user 0m0.222s 00:07:25.079 sys 0m0.028s 00:07:25.079 14:00:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.079 14:00:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.079 ************************************ 00:07:25.079 END 
TEST rpc_trace_cmd_test 00:07:25.079 ************************************ 00:07:25.079 14:00:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:25.079 14:00:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:25.079 14:00:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:25.079 14:00:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:25.079 14:00:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:25.079 14:00:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.079 14:00:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.079 ************************************ 00:07:25.079 START TEST rpc_daemon_integrity 00:07:25.079 ************************************ 00:07:25.079 14:00:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:07:25.079 14:00:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:25.079 14:00:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.079 14:00:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.079 14:00:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.079 14:00:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:25.079 14:00:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.079 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:25.079 { 00:07:25.079 "name": "Malloc2", 00:07:25.079 "aliases": [ 00:07:25.079 "36f9785a-0301-4851-95c5-0bcb349b79ac" 00:07:25.079 ], 00:07:25.079 "product_name": "Malloc disk", 00:07:25.079 "block_size": 512, 00:07:25.079 "num_blocks": 16384, 00:07:25.079 "uuid": "36f9785a-0301-4851-95c5-0bcb349b79ac", 00:07:25.079 "assigned_rate_limits": { 00:07:25.079 "rw_ios_per_sec": 0, 00:07:25.079 "rw_mbytes_per_sec": 0, 00:07:25.079 "r_mbytes_per_sec": 0, 00:07:25.079 "w_mbytes_per_sec": 0 00:07:25.079 }, 00:07:25.079 "claimed": false, 00:07:25.079 "zoned": false, 00:07:25.079 "supported_io_types": { 00:07:25.079 "read": true, 00:07:25.079 "write": true, 00:07:25.079 "unmap": true, 00:07:25.079 "flush": true, 00:07:25.079 "reset": true, 00:07:25.079 "nvme_admin": false, 00:07:25.080 "nvme_io": false, 00:07:25.080 "nvme_io_md": false, 00:07:25.080 "write_zeroes": true, 00:07:25.080 "zcopy": true, 00:07:25.080 "get_zone_info": false, 00:07:25.080 "zone_management": false, 00:07:25.080 "zone_append": false, 00:07:25.080 "compare": false, 00:07:25.080 "compare_and_write": false, 00:07:25.080 "abort": true, 00:07:25.080 "seek_hole": false, 
00:07:25.080 "seek_data": false, 00:07:25.080 "copy": true, 00:07:25.080 "nvme_iov_md": false 00:07:25.080 }, 00:07:25.080 "memory_domains": [ 00:07:25.080 { 00:07:25.080 "dma_device_id": "system", 00:07:25.080 "dma_device_type": 1 00:07:25.080 }, 00:07:25.080 { 00:07:25.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.080 "dma_device_type": 2 00:07:25.080 } 00:07:25.080 ], 00:07:25.080 "driver_specific": {} 00:07:25.080 } 00:07:25.080 ]' 00:07:25.080 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.338 [2024-07-15 14:00:11.132611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:25.338 [2024-07-15 14:00:11.132845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.338 [2024-07-15 14:00:11.133041] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:25.338 [2024-07-15 14:00:11.133187] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.338 [2024-07-15 14:00:11.135140] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.338 [2024-07-15 14:00:11.135314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:25.338 Passthru0 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:25.338 { 00:07:25.338 "name": "Malloc2", 00:07:25.338 "aliases": [ 00:07:25.338 "36f9785a-0301-4851-95c5-0bcb349b79ac" 00:07:25.338 ], 00:07:25.338 "product_name": "Malloc disk", 00:07:25.338 "block_size": 512, 00:07:25.338 "num_blocks": 16384, 00:07:25.338 "uuid": "36f9785a-0301-4851-95c5-0bcb349b79ac", 00:07:25.338 "assigned_rate_limits": { 00:07:25.338 "rw_ios_per_sec": 0, 00:07:25.338 "rw_mbytes_per_sec": 0, 00:07:25.338 "r_mbytes_per_sec": 0, 00:07:25.338 "w_mbytes_per_sec": 0 00:07:25.338 }, 00:07:25.338 "claimed": true, 00:07:25.338 "claim_type": "exclusive_write", 00:07:25.338 "zoned": false, 00:07:25.338 "supported_io_types": { 00:07:25.338 "read": true, 00:07:25.338 "write": true, 00:07:25.338 "unmap": true, 00:07:25.338 "flush": true, 00:07:25.338 "reset": true, 00:07:25.338 "nvme_admin": false, 00:07:25.338 "nvme_io": false, 00:07:25.338 "nvme_io_md": false, 00:07:25.338 "write_zeroes": true, 00:07:25.338 "zcopy": true, 00:07:25.338 "get_zone_info": false, 00:07:25.338 "zone_management": false, 00:07:25.338 "zone_append": false, 00:07:25.338 "compare": false, 00:07:25.338 "compare_and_write": false, 00:07:25.338 "abort": true, 00:07:25.338 "seek_hole": false, 00:07:25.338 "seek_data": false, 00:07:25.338 "copy": true, 00:07:25.338 "nvme_iov_md": false 00:07:25.338 }, 00:07:25.338 
"memory_domains": [ 00:07:25.338 { 00:07:25.338 "dma_device_id": "system", 00:07:25.338 "dma_device_type": 1 00:07:25.338 }, 00:07:25.338 { 00:07:25.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.338 "dma_device_type": 2 00:07:25.338 } 00:07:25.338 ], 00:07:25.338 "driver_specific": {} 00:07:25.338 }, 00:07:25.338 { 00:07:25.338 "name": "Passthru0", 00:07:25.338 "aliases": [ 00:07:25.338 "3ce89912-f6a1-556a-aa7c-744b1267361c" 00:07:25.338 ], 00:07:25.338 "product_name": "passthru", 00:07:25.338 "block_size": 512, 00:07:25.338 "num_blocks": 16384, 00:07:25.338 "uuid": "3ce89912-f6a1-556a-aa7c-744b1267361c", 00:07:25.338 "assigned_rate_limits": { 00:07:25.338 "rw_ios_per_sec": 0, 00:07:25.338 "rw_mbytes_per_sec": 0, 00:07:25.338 "r_mbytes_per_sec": 0, 00:07:25.338 "w_mbytes_per_sec": 0 00:07:25.338 }, 00:07:25.338 "claimed": false, 00:07:25.338 "zoned": false, 00:07:25.338 "supported_io_types": { 00:07:25.338 "read": true, 00:07:25.338 "write": true, 00:07:25.338 "unmap": true, 00:07:25.338 "flush": true, 00:07:25.338 "reset": true, 00:07:25.338 "nvme_admin": false, 00:07:25.338 "nvme_io": false, 00:07:25.338 "nvme_io_md": false, 00:07:25.338 "write_zeroes": true, 00:07:25.338 "zcopy": true, 00:07:25.338 "get_zone_info": false, 00:07:25.338 "zone_management": false, 00:07:25.338 "zone_append": false, 00:07:25.338 "compare": false, 00:07:25.338 "compare_and_write": false, 00:07:25.338 "abort": true, 00:07:25.338 "seek_hole": false, 00:07:25.338 "seek_data": false, 00:07:25.338 "copy": true, 00:07:25.338 "nvme_iov_md": false 00:07:25.338 }, 00:07:25.338 "memory_domains": [ 00:07:25.338 { 00:07:25.338 "dma_device_id": "system", 00:07:25.338 "dma_device_type": 1 00:07:25.338 }, 00:07:25.338 { 00:07:25.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.338 "dma_device_type": 2 00:07:25.338 } 00:07:25.338 ], 00:07:25.338 "driver_specific": { 00:07:25.338 "passthru": { 00:07:25.338 "name": "Passthru0", 00:07:25.338 "base_bdev_name": "Malloc2" 00:07:25.338 } 00:07:25.338 } 00:07:25.338 } 00:07:25.338 ]' 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:25.338 
14:00:11 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:25.338 00:07:25.338 real 0m0.331s 00:07:25.338 user 0m0.211s 00:07:25.338 sys 0m0.023s 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.338 ************************************ 00:07:25.338 END TEST rpc_daemon_integrity 00:07:25.338 ************************************ 00:07:25.338 14:00:11 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:25.596 14:00:11 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:25.596 14:00:11 rpc -- rpc/rpc.sh@84 -- # killprocess 177659 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@948 -- # '[' -z 177659 ']' 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@952 -- # kill -0 177659 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@953 -- # uname 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177659 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:25.596 killing process with pid 177659 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177659' 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@967 -- # kill 177659 00:07:25.596 14:00:11 rpc -- common/autotest_common.sh@972 -- # wait 177659 00:07:28.121 00:07:28.121 real 0m5.195s 00:07:28.121 user 0m5.884s 00:07:28.121 sys 0m0.746s 00:07:28.121 14:00:13 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.121 14:00:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.121 ************************************ 00:07:28.121 END TEST rpc 00:07:28.121 ************************************ 00:07:28.121 14:00:13 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.121 14:00:13 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:28.121 14:00:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.121 14:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.121 14:00:13 -- common/autotest_common.sh@10 -- # set +x 00:07:28.121 ************************************ 00:07:28.121 START TEST skip_rpc 00:07:28.121 ************************************ 00:07:28.121 14:00:13 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:28.121 * Looking for test storage... 
00:07:28.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:28.121 14:00:14 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:28.121 14:00:14 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:28.121 14:00:14 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:28.121 14:00:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.121 14:00:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.121 14:00:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.121 ************************************ 00:07:28.121 START TEST skip_rpc 00:07:28.121 ************************************ 00:07:28.121 14:00:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:07:28.121 14:00:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=177911 00:07:28.121 14:00:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:28.121 14:00:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:28.121 14:00:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:28.121 [2024-07-15 14:00:14.101579] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:07:28.121 [2024-07-15 14:00:14.102128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177911 ] 00:07:28.379 [2024-07-15 14:00:14.273022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.637 [2024-07-15 14:00:14.522600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 177911 
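The skip_rpc case above is a negative test: with --no-rpc-server the target must not accept RPC clients, so the spdk_get_version call is expected to fail. A hedged stand-alone version of that check, using the paths printed in the log and a plain sleep in place of waitforlisten (there is no socket to poll), might be:

spdk=/home/vagrant/spdk_repo/spdk

# Start the target without an RPC server at all, as skip_rpc.sh does above.
"$spdk/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
pid=$!
sleep 5    # nothing to poll: the RPC socket is deliberately absent

# Any RPC must fail now; success here would mean the flag was ignored.
if "$spdk/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; then
    echo "FAIL: RPC server answered despite --no-rpc-server" >&2
else
    echo "PASS: RPC correctly unavailable"
fi
kill "$pid"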
00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 177911 ']' 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 177911 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 177911 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 177911' 00:07:33.900 killing process with pid 177911 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 177911 00:07:33.900 14:00:19 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 177911 00:07:35.802 00:07:35.802 real 0m7.336s 00:07:35.802 user 0m6.744s 00:07:35.802 sys 0m0.477s 00:07:35.802 14:00:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.802 14:00:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.802 ************************************ 00:07:35.802 END TEST skip_rpc 00:07:35.802 ************************************ 00:07:35.802 14:00:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:35.802 14:00:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:35.802 14:00:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.802 14:00:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.802 14:00:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.802 ************************************ 00:07:35.802 START TEST skip_rpc_with_json 00:07:35.802 ************************************ 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=178028 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 178028 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 178028 ']' 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.802 14:00:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:35.802 [2024-07-15 14:00:21.491275] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:07:35.802 [2024-07-15 14:00:21.491766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178028 ] 00:07:35.802 [2024-07-15 14:00:21.662214] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.060 [2024-07-15 14:00:21.901841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:36.994 [2024-07-15 14:00:22.681097] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:36.994 request: 00:07:36.994 { 00:07:36.994 "trtype": "tcp", 00:07:36.994 "method": "nvmf_get_transports", 00:07:36.994 "req_id": 1 00:07:36.994 } 00:07:36.994 Got JSON-RPC error response 00:07:36.994 response: 00:07:36.994 { 00:07:36.994 "code": -19, 00:07:36.994 "message": "No such device" 00:07:36.994 } 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:36.994 [2024-07-15 14:00:22.689490] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.994 14:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:36.994 { 00:07:36.994 "subsystems": [ 00:07:36.994 { 00:07:36.994 "subsystem": "scheduler", 00:07:36.994 "config": [ 00:07:36.994 { 00:07:36.994 "method": "framework_set_scheduler", 00:07:36.994 "params": { 00:07:36.994 "name": "static" 00:07:36.994 } 00:07:36.994 } 00:07:36.994 ] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "vmd", 00:07:36.994 "config": [] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "sock", 00:07:36.994 "config": [ 00:07:36.994 { 00:07:36.994 "method": "sock_set_default_impl", 00:07:36.994 "params": { 00:07:36.994 "impl_name": "posix" 00:07:36.994 } 00:07:36.994 
}, 00:07:36.994 { 00:07:36.994 "method": "sock_impl_set_options", 00:07:36.994 "params": { 00:07:36.994 "impl_name": "ssl", 00:07:36.994 "recv_buf_size": 4096, 00:07:36.994 "send_buf_size": 4096, 00:07:36.994 "enable_recv_pipe": true, 00:07:36.994 "enable_quickack": false, 00:07:36.994 "enable_placement_id": 0, 00:07:36.994 "enable_zerocopy_send_server": true, 00:07:36.994 "enable_zerocopy_send_client": false, 00:07:36.994 "zerocopy_threshold": 0, 00:07:36.994 "tls_version": 0, 00:07:36.994 "enable_ktls": false 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "sock_impl_set_options", 00:07:36.994 "params": { 00:07:36.994 "impl_name": "posix", 00:07:36.994 "recv_buf_size": 2097152, 00:07:36.994 "send_buf_size": 2097152, 00:07:36.994 "enable_recv_pipe": true, 00:07:36.994 "enable_quickack": false, 00:07:36.994 "enable_placement_id": 0, 00:07:36.994 "enable_zerocopy_send_server": true, 00:07:36.994 "enable_zerocopy_send_client": false, 00:07:36.994 "zerocopy_threshold": 0, 00:07:36.994 "tls_version": 0, 00:07:36.994 "enable_ktls": false 00:07:36.994 } 00:07:36.994 } 00:07:36.994 ] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "iobuf", 00:07:36.994 "config": [ 00:07:36.994 { 00:07:36.994 "method": "iobuf_set_options", 00:07:36.994 "params": { 00:07:36.994 "small_pool_count": 8192, 00:07:36.994 "large_pool_count": 1024, 00:07:36.994 "small_bufsize": 8192, 00:07:36.994 "large_bufsize": 135168 00:07:36.994 } 00:07:36.994 } 00:07:36.994 ] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "keyring", 00:07:36.994 "config": [] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "accel", 00:07:36.994 "config": [ 00:07:36.994 { 00:07:36.994 "method": "accel_set_options", 00:07:36.994 "params": { 00:07:36.994 "small_cache_size": 128, 00:07:36.994 "large_cache_size": 16, 00:07:36.994 "task_count": 2048, 00:07:36.994 "sequence_count": 2048, 00:07:36.994 "buf_count": 2048 00:07:36.994 } 00:07:36.994 } 00:07:36.994 ] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "bdev", 00:07:36.994 "config": [ 00:07:36.994 { 00:07:36.994 "method": "bdev_set_options", 00:07:36.994 "params": { 00:07:36.994 "bdev_io_pool_size": 65535, 00:07:36.994 "bdev_io_cache_size": 256, 00:07:36.994 "bdev_auto_examine": true, 00:07:36.994 "iobuf_small_cache_size": 128, 00:07:36.994 "iobuf_large_cache_size": 16 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "bdev_raid_set_options", 00:07:36.994 "params": { 00:07:36.994 "process_window_size_kb": 1024 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "bdev_nvme_set_options", 00:07:36.994 "params": { 00:07:36.994 "action_on_timeout": "none", 00:07:36.994 "timeout_us": 0, 00:07:36.994 "timeout_admin_us": 0, 00:07:36.994 "keep_alive_timeout_ms": 10000, 00:07:36.994 "arbitration_burst": 0, 00:07:36.994 "low_priority_weight": 0, 00:07:36.994 "medium_priority_weight": 0, 00:07:36.994 "high_priority_weight": 0, 00:07:36.994 "nvme_adminq_poll_period_us": 10000, 00:07:36.994 "nvme_ioq_poll_period_us": 0, 00:07:36.994 "io_queue_requests": 0, 00:07:36.994 "delay_cmd_submit": true, 00:07:36.994 "transport_retry_count": 4, 00:07:36.994 "bdev_retry_count": 3, 00:07:36.994 "transport_ack_timeout": 0, 00:07:36.994 "ctrlr_loss_timeout_sec": 0, 00:07:36.994 "reconnect_delay_sec": 0, 00:07:36.994 "fast_io_fail_timeout_sec": 0, 00:07:36.994 "disable_auto_failback": false, 00:07:36.994 "generate_uuids": false, 00:07:36.994 "transport_tos": 0, 00:07:36.994 "nvme_error_stat": false, 00:07:36.994 "rdma_srq_size": 0, 
00:07:36.994 "io_path_stat": false, 00:07:36.994 "allow_accel_sequence": false, 00:07:36.994 "rdma_max_cq_size": 0, 00:07:36.994 "rdma_cm_event_timeout_ms": 0, 00:07:36.994 "dhchap_digests": [ 00:07:36.994 "sha256", 00:07:36.994 "sha384", 00:07:36.994 "sha512" 00:07:36.994 ], 00:07:36.994 "dhchap_dhgroups": [ 00:07:36.994 "null", 00:07:36.994 "ffdhe2048", 00:07:36.994 "ffdhe3072", 00:07:36.994 "ffdhe4096", 00:07:36.994 "ffdhe6144", 00:07:36.994 "ffdhe8192" 00:07:36.994 ] 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "bdev_nvme_set_hotplug", 00:07:36.994 "params": { 00:07:36.994 "period_us": 100000, 00:07:36.994 "enable": false 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "bdev_iscsi_set_options", 00:07:36.994 "params": { 00:07:36.994 "timeout_sec": 30 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "bdev_wait_for_examine" 00:07:36.994 } 00:07:36.994 ] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "nvmf", 00:07:36.994 "config": [ 00:07:36.994 { 00:07:36.994 "method": "nvmf_set_config", 00:07:36.994 "params": { 00:07:36.994 "discovery_filter": "match_any", 00:07:36.994 "admin_cmd_passthru": { 00:07:36.994 "identify_ctrlr": false 00:07:36.994 } 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "nvmf_set_max_subsystems", 00:07:36.994 "params": { 00:07:36.994 "max_subsystems": 1024 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "nvmf_set_crdt", 00:07:36.994 "params": { 00:07:36.994 "crdt1": 0, 00:07:36.994 "crdt2": 0, 00:07:36.994 "crdt3": 0 00:07:36.994 } 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "method": "nvmf_create_transport", 00:07:36.994 "params": { 00:07:36.994 "trtype": "TCP", 00:07:36.994 "max_queue_depth": 128, 00:07:36.994 "max_io_qpairs_per_ctrlr": 127, 00:07:36.994 "in_capsule_data_size": 4096, 00:07:36.994 "max_io_size": 131072, 00:07:36.994 "io_unit_size": 131072, 00:07:36.994 "max_aq_depth": 128, 00:07:36.994 "num_shared_buffers": 511, 00:07:36.994 "buf_cache_size": 4294967295, 00:07:36.994 "dif_insert_or_strip": false, 00:07:36.994 "zcopy": false, 00:07:36.994 "c2h_success": true, 00:07:36.994 "sock_priority": 0, 00:07:36.994 "abort_timeout_sec": 1, 00:07:36.994 "ack_timeout": 0, 00:07:36.994 "data_wr_pool_size": 0 00:07:36.994 } 00:07:36.994 } 00:07:36.994 ] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "nbd", 00:07:36.994 "config": [] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "vhost_blk", 00:07:36.994 "config": [] 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "scsi", 00:07:36.994 "config": null 00:07:36.994 }, 00:07:36.994 { 00:07:36.994 "subsystem": "iscsi", 00:07:36.994 "config": [ 00:07:36.994 { 00:07:36.994 "method": "iscsi_set_options", 00:07:36.994 "params": { 00:07:36.994 "node_base": "iqn.2016-06.io.spdk", 00:07:36.994 "max_sessions": 128, 00:07:36.994 "max_connections_per_session": 2, 00:07:36.994 "max_queue_depth": 64, 00:07:36.994 "default_time2wait": 2, 00:07:36.994 "default_time2retain": 20, 00:07:36.994 "first_burst_length": 8192, 00:07:36.994 "immediate_data": true, 00:07:36.994 "allow_duplicated_isid": false, 00:07:36.994 "error_recovery_level": 0, 00:07:36.994 "nop_timeout": 60, 00:07:36.994 "nop_in_interval": 30, 00:07:36.994 "disable_chap": false, 00:07:36.994 "require_chap": false, 00:07:36.994 "mutual_chap": false, 00:07:36.994 "chap_group": 0, 00:07:36.994 "max_large_datain_per_connection": 64, 00:07:36.995 "max_r2t_per_connection": 4, 00:07:36.995 "pdu_pool_size": 36864, 00:07:36.995 
"immediate_data_pool_size": 16384, 00:07:36.995 "data_out_pool_size": 2048 00:07:36.995 } 00:07:36.995 } 00:07:36.995 ] 00:07:36.995 }, 00:07:36.995 { 00:07:36.995 "subsystem": "vhost_scsi", 00:07:36.995 "config": [] 00:07:36.995 } 00:07:36.995 ] 00:07:36.995 } 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 178028 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 178028 ']' 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 178028 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178028 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178028' 00:07:36.995 killing process with pid 178028 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 178028 00:07:36.995 14:00:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 178028 00:07:39.528 14:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=178092 00:07:39.528 14:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:39.528 14:00:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:44.789 14:00:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 178092 00:07:44.789 14:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 178092 ']' 00:07:44.789 14:00:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 178092 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178092 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178092' 00:07:44.789 killing process with pid 178092 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 178092 00:07:44.789 14:00:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 178092 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:46.689 00:07:46.689 real 0m10.798s 00:07:46.689 user 0m10.225s 
00:07:46.689 sys 0m0.942s 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 ************************************ 00:07:46.689 END TEST skip_rpc_with_json 00:07:46.689 ************************************ 00:07:46.689 14:00:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:46.689 14:00:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:46.689 14:00:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.689 14:00:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.689 14:00:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 ************************************ 00:07:46.689 START TEST skip_rpc_with_delay 00:07:46.689 ************************************ 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:46.689 [2024-07-15 14:00:32.354328] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
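skip_rpc_with_delay, whose output begins above, only has to confirm that an impossible flag combination is rejected: --wait-for-rpc is meaningless when --no-rpc-server disables the RPC server, so spdk_tgt must print the error just shown and exit non-zero. As a one-off sketch of that assertion:

# The delay test reduces to this check: the two flags below must conflict.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi
echo "PASS: conflicting flags rejected with a non-zero exit code"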
00:07:46.689 [2024-07-15 14:00:32.355206] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:46.689 00:07:46.689 real 0m0.102s 00:07:46.689 user 0m0.058s 00:07:46.689 sys 0m0.041s 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.689 14:00:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:46.689 ************************************ 00:07:46.689 END TEST skip_rpc_with_delay 00:07:46.689 ************************************ 00:07:46.690 14:00:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:46.690 14:00:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:46.690 14:00:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:46.690 14:00:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:46.690 14:00:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:46.690 14:00:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.690 14:00:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.690 ************************************ 00:07:46.690 START TEST exit_on_failed_rpc_init 00:07:46.690 ************************************ 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=178231 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 178231 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 178231 ']' 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:46.690 14:00:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:46.690 [2024-07-15 14:00:32.506682] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:07:46.690 [2024-07-15 14:00:32.507093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178231 ] 00:07:46.690 [2024-07-15 14:00:32.667639] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.947 [2024-07-15 14:00:32.923170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:07:47.884 14:00:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:07:47.884 [2024-07-15 14:00:33.758623] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:07:47.884 [2024-07-15 14:00:33.759628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178254 ] 00:07:48.143 [2024-07-15 14:00:33.923763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.413 [2024-07-15 14:00:34.210812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.413 [2024-07-15 14:00:34.211208] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
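The error above is exactly what exit_on_failed_rpc_init is looking for: both spdk_tgt instances default to /var/tmp/spdk.sock, so the second listen fails, RPC initialization aborts, and the second app is expected to stop with a non-zero status. When two targets genuinely need to coexist, the tests give each instance its own socket via -r, along these lines (the initiator-style instance is defined later in this run but never actually started):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_tgt.sock &
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_initiator.sock &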
00:07:48.413 [2024-07-15 14:00:34.211456] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:48.413 [2024-07-15 14:00:34.211653] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 178231 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 178231 ']' 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 178231 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:07:48.672 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.673 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178231 00:07:48.931 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.931 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:48.931 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178231' 00:07:48.931 killing process with pid 178231 00:07:48.931 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 178231 00:07:48.931 14:00:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 178231 00:07:51.457 00:07:51.457 real 0m4.458s 00:07:51.457 user 0m5.217s 00:07:51.457 sys 0m0.585s 00:07:51.457 14:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.457 14:00:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:51.457 ************************************ 00:07:51.457 END TEST exit_on_failed_rpc_init 00:07:51.457 ************************************ 00:07:51.457 14:00:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:51.457 14:00:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:51.457 00:07:51.457 real 0m23.012s 00:07:51.457 user 0m22.345s 00:07:51.457 sys 0m2.229s 00:07:51.457 14:00:36 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.457 14:00:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.457 ************************************ 00:07:51.457 END TEST skip_rpc 00:07:51.457 ************************************ 00:07:51.457 14:00:37 -- common/autotest_common.sh@1142 -- # return 0 00:07:51.457 14:00:37 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:51.457 14:00:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
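For reference, the killprocess sequence traced a few lines up follows a fixed pattern: probe the PID with kill -0, confirm via ps that it is still the reactor process (and not sudo), then signal it and wait for it to exit. A condensed sketch of that pattern, simplified and assuming the target is a child of the current shell so wait applies:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0                        # already gone
    [[ "$(ps --no-headers -o comm= "$pid")" != sudo ]] || return 1  # refuse to kill sudo
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}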
00:07:51.457 14:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.457 14:00:37 -- common/autotest_common.sh@10 -- # set +x 00:07:51.457 ************************************ 00:07:51.457 START TEST rpc_client 00:07:51.457 ************************************ 00:07:51.457 14:00:37 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:51.457 * Looking for test storage... 00:07:51.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:51.457 14:00:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:51.457 OK 00:07:51.457 14:00:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:51.457 00:07:51.457 real 0m0.152s 00:07:51.457 user 0m0.071s 00:07:51.457 sys 0m0.090s 00:07:51.457 14:00:37 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.457 14:00:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:51.457 ************************************ 00:07:51.457 END TEST rpc_client 00:07:51.457 ************************************ 00:07:51.457 14:00:37 -- common/autotest_common.sh@1142 -- # return 0 00:07:51.457 14:00:37 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:51.457 14:00:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.457 14:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.457 14:00:37 -- common/autotest_common.sh@10 -- # set +x 00:07:51.457 ************************************ 00:07:51.457 START TEST json_config 00:07:51.457 ************************************ 00:07:51.457 14:00:37 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:51.457 14:00:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.457 14:00:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6eb37903-5e6e-4bf2-b995-7433baab6b1f 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6eb37903-5e6e-4bf2-b995-7433baab6b1f 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.458 14:00:37 
json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.458 14:00:37 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.458 14:00:37 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.458 14:00:37 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.458 14:00:37 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:51.458 14:00:37 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:51.458 14:00:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:51.458 14:00:37 json_config -- paths/export.sh@5 -- # export PATH 00:07:51.458 14:00:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@47 -- # : 0 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.458 14:00:37 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:51.458 14:00:37 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:07:51.458 INFO: JSON configuration test init 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 14:00:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:07:51.458 14:00:37 json_config -- json_config/common.sh@9 -- # local app=target 00:07:51.458 14:00:37 json_config -- json_config/common.sh@10 -- # shift 00:07:51.458 14:00:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:51.458 14:00:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:51.458 14:00:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:51.458 14:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:51.458 14:00:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:51.458 14:00:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=178415 00:07:51.458 14:00:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:51.458 Waiting for target to run... 
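The launch that follows is assembled from the lookup tables declared just above. Reproducing them directly (socket paths and parameter strings copied from the trace; the start_app wrapper below is a hypothetical condensation, and the initiator entry is defined but never started in this run):

declare -A app_pid
declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock [initiator]=/var/tmp/spdk_initiator.sock)
declare -A app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024')

start_app() {
    local app=$1; shift
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" "$@" &
    app_pid[$app]=$!
}
# start_app target --wait-for-rpc   # mirrors the spdk_tgt invocation below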
00:07:51.458 14:00:37 json_config -- json_config/common.sh@25 -- # waitforlisten 178415 /var/tmp/spdk_tgt.sock 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 178415 ']' 00:07:51.458 14:00:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:51.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.458 14:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.458 [2024-07-15 14:00:37.387604] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:07:51.458 [2024-07-15 14:00:37.388313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178415 ] 00:07:52.025 [2024-07-15 14:00:37.929048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.289 [2024-07-15 14:00:38.119458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.547 14:00:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.547 14:00:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:07:52.547 14:00:38 json_config -- json_config/common.sh@26 -- # echo '' 00:07:52.547 00:07:52.547 14:00:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:07:52.547 14:00:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:07:52.547 14:00:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.547 14:00:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.547 14:00:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:07:52.547 14:00:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:07:52.547 14:00:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:52.547 14:00:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:52.547 14:00:38 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:52.547 14:00:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:07:52.547 14:00:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:53.480 14:00:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:07:53.480 14:00:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:53.480 14:00:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:53.480 14:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:53.480 14:00:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:53.480 14:00:39 json_config -- 
json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:53.480 14:00:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:53.742 14:00:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:07:53.742 14:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:53.742 14:00:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:07:54.025 14:00:39 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.025 14:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@55 -- # return 0 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:07:54.025 14:00:39 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.025 14:00:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:07:54.025 14:00:39 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:07:54.026 14:00:39 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:54.026 14:00:39 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:54.026 14:00:39 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:07:54.026 14:00:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:54.026 14:00:39 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:54.283 14:00:40 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:07:54.283 14:00:40 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:54.283 14:00:40 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:54.283 14:00:40 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:07:54.283 14:00:40 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:07:54.284 14:00:40 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:54.284 14:00:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 
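As a map of where this is heading: the whole bdev topology built by create_bdev_subsystem_config is driven over the private RPC socket, and the individual calls are scattered through the next stretch of the trace, each followed by its output (starting with the Nvme0n1p0/Nvme0n1p1 split echoed immediately below). Collected in one place, as they appear in this run:

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_split_create Nvme0n1 2                      # Nvme0n1p0, Nvme0n1p1
$RPC bdev_split_create Malloc0 3                      # deferred until Malloc0 exists
$RPC bdev_malloc_create 8 4096 --name Malloc3
$RPC bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
$RPC bdev_null_create Null0 32 512
$RPC bdev_malloc_create 32 512 --name Malloc0
$RPC bdev_malloc_create 16 4096 --name Malloc1
$RPC bdev_aio_create /sample_aio aio_disk 1024        # backed by the dd-created file
$RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test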
00:07:54.540 Nvme0n1p0 Nvme0n1p1 00:07:54.540 14:00:40 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:54.540 14:00:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:54.797 [2024-07-15 14:00:40.646769] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:54.797 [2024-07-15 14:00:40.647646] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:54.797 00:07:54.797 14:00:40 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:54.797 14:00:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:55.053 Malloc3 00:07:55.053 14:00:41 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:55.053 14:00:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:55.310 [2024-07-15 14:00:41.294676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:55.310 [2024-07-15 14:00:41.295155] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.310 [2024-07-15 14:00:41.295428] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:55.310 [2024-07-15 14:00:41.295650] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.310 [2024-07-15 14:00:41.297663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.310 [2024-07-15 14:00:41.297921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:55.310 PTBdevFromMalloc3 00:07:55.568 14:00:41 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:55.568 14:00:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:55.826 Null0 00:07:55.826 14:00:41 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:55.826 14:00:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:56.084 Malloc0 00:07:56.084 14:00:41 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:56.084 14:00:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:56.377 Malloc1 00:07:56.377 14:00:42 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:56.377 14:00:42 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:56.637 102400+0 records in 00:07:56.637 102400+0 records out 00:07:56.637 104857600 bytes (105 MB, 100 MiB) copied, 0.3116 
s, 337 MB/s 00:07:56.637 14:00:42 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:56.637 14:00:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:56.894 aio_disk 00:07:57.152 14:00:42 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:57.152 14:00:42 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:57.152 14:00:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:57.408 09da1b5b-f9b4-4749-9e7b-f82d647b202b 00:07:57.408 14:00:43 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:57.408 14:00:43 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:57.408 14:00:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:57.665 14:00:43 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:57.665 14:00:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:57.923 14:00:43 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:57.923 14:00:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:58.180 14:00:44 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:58.180 14:00:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:222c1491-f4ec-4ce2-9021-e38e26bce180 bdev_register:51c33cca-e5d2-4c55-a020-b91e2627259a bdev_register:9d138d4f-7753-478f-b979-21ef8b58946e bdev_register:cc32a658-9270-4816-8c34-9c04b2c5ac47 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@71 -- # printf 
'%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:222c1491-f4ec-4ce2-9021-e38e26bce180 bdev_register:51c33cca-e5d2-4c55-a020-b91e2627259a bdev_register:9d138d4f-7753-478f-b979-21ef8b58946e bdev_register:cc32a658-9270-4816-8c34-9c04b2c5ac47 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@71 -- # sort 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@72 -- # sort 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:07:58.438 14:00:44 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:58.438 14:00:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:222c1491-f4ec-4ce2-9021-e38e26bce180 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:51c33cca-e5d2-4c55-a020-b91e2627259a 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:9d138d4f-7753-478f-b979-21ef8b58946e 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:cc32a658-9270-4816-8c34-9c04b2c5ac47 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:222c1491-f4ec-4ce2-9021-e38e26bce180 bdev_register:51c33cca-e5d2-4c55-a020-b91e2627259a bdev_register:9d138d4f-7753-478f-b979-21ef8b58946e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:cc32a658-9270-4816-8c34-9c04b2c5ac47 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\2\2\c\1\4\9\1\-\f\4\e\c\-\4\c\e\2\-\9\0\2\1\-\e\3\8\e\2\6\b\c\e\1\8\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\1\c\3\3\c\c\a\-\e\5\d\2\-\4\c\5\5\-\a\0\2\0\-\b\9\1\e\2\6\2\7\2\5\9\a\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\d\1\3\8\d\4\f\-\7\7\5\3\-\4\7\8\f\-\b\9\7\9\-\2\1\e\f\8\b\5\8\9\4\6\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ 
\b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\c\3\2\a\6\5\8\-\9\2\7\0\-\4\8\1\6\-\8\c\3\4\-\9\c\0\4\b\2\c\5\a\c\4\7 ]] 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@86 -- # cat 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:222c1491-f4ec-4ce2-9021-e38e26bce180 bdev_register:51c33cca-e5d2-4c55-a020-b91e2627259a bdev_register:9d138d4f-7753-478f-b979-21ef8b58946e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:cc32a658-9270-4816-8c34-9c04b2c5ac47 00:07:58.696 Expected events matched: 00:07:58.696 bdev_register:222c1491-f4ec-4ce2-9021-e38e26bce180 00:07:58.696 bdev_register:51c33cca-e5d2-4c55-a020-b91e2627259a 00:07:58.696 bdev_register:9d138d4f-7753-478f-b979-21ef8b58946e 00:07:58.696 bdev_register:Malloc0 00:07:58.696 bdev_register:Malloc0p0 00:07:58.696 bdev_register:Malloc0p1 00:07:58.696 bdev_register:Malloc0p2 00:07:58.696 bdev_register:Malloc1 00:07:58.696 bdev_register:Malloc3 00:07:58.696 bdev_register:Null0 00:07:58.696 bdev_register:Nvme0n1 00:07:58.696 bdev_register:Nvme0n1p0 00:07:58.696 bdev_register:Nvme0n1p1 00:07:58.696 bdev_register:PTBdevFromMalloc3 00:07:58.696 bdev_register:aio_disk 00:07:58.696 bdev_register:cc32a658-9270-4816-8c34-9c04b2c5ac47 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:07:58.696 14:00:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.696 14:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:07:58.696 14:00:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:58.696 14:00:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:58.696 14:00:44 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:07:58.697 14:00:44 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:58.697 14:00:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:59.282 MallocBdevForConfigChangeCheck 00:07:59.282 14:00:44 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:07:59.282 14:00:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.282 14:00:44 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.282 14:00:45 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:07:59.282 14:00:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:59.539 14:00:45 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:07:59.539 INFO: shutting down applications... 00:07:59.539 14:00:45 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:07:59.539 14:00:45 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:07:59.539 14:00:45 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:07:59.539 14:00:45 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:59.796 [2024-07-15 14:00:45.608378] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:08:00.054 Calling clear_vhost_scsi_subsystem 00:08:00.054 Calling clear_iscsi_subsystem 00:08:00.054 Calling clear_vhost_blk_subsystem 00:08:00.054 Calling clear_nbd_subsystem 00:08:00.054 Calling clear_nvmf_subsystem 00:08:00.054 Calling clear_bdev_subsystem 00:08:00.054 14:00:45 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:08:00.054 14:00:45 json_config -- json_config/json_config.sh@343 -- # count=100 00:08:00.054 14:00:45 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:08:00.054 14:00:45 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:00.054 14:00:45 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:00.054 14:00:45 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:08:00.312 14:00:46 json_config -- json_config/json_config.sh@345 -- # break 00:08:00.312 14:00:46 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:08:00.312 14:00:46 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:08:00.312 14:00:46 json_config -- json_config/common.sh@31 -- # local app=target 00:08:00.312 14:00:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:00.312 14:00:46 json_config -- json_config/common.sh@35 -- # [[ -n 178415 ]] 00:08:00.312 14:00:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 178415 00:08:00.312 14:00:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:00.312 14:00:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:00.312 14:00:46 json_config -- json_config/common.sh@41 -- # kill -0 178415 00:08:00.312 14:00:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:00.877 14:00:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:00.877 14:00:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:00.877 14:00:46 json_config -- json_config/common.sh@41 -- # kill -0 178415 00:08:00.877 14:00:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:01.444 14:00:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:01.444 14:00:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:01.444 14:00:47 json_config -- 
json_config/common.sh@41 -- # kill -0 178415 00:08:01.444 14:00:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:01.444 14:00:47 json_config -- json_config/common.sh@43 -- # break 00:08:01.444 14:00:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:01.444 14:00:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:01.444 SPDK target shutdown done 00:08:01.444 14:00:47 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:08:01.444 INFO: relaunching applications... 00:08:01.444 14:00:47 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:01.444 14:00:47 json_config -- json_config/common.sh@9 -- # local app=target 00:08:01.444 14:00:47 json_config -- json_config/common.sh@10 -- # shift 00:08:01.444 14:00:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:01.444 14:00:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:01.444 14:00:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:01.444 14:00:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:01.444 14:00:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:01.444 14:00:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=178696 00:08:01.444 14:00:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:01.444 Waiting for target to run... 00:08:01.444 14:00:47 json_config -- json_config/common.sh@25 -- # waitforlisten 178696 /var/tmp/spdk_tgt.sock 00:08:01.444 14:00:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 178696 ']' 00:08:01.444 14:00:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:01.444 14:00:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:01.444 14:00:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:01.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:01.444 14:00:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:01.444 14:00:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:01.444 14:00:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:01.444 [2024-07-15 14:00:47.285054] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
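Between "SPDK target shutdown done" and this relaunch, the running configuration was captured over RPC and is now handed back to a fresh target at startup via --json instead of being replayed call by call. Conceptually (the save_config redirection into spdk_tgt_config.json is inferred, not shown verbatim in the trace):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json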
00:08:01.444 [2024-07-15 14:00:47.285614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178696 ] 00:08:02.013 [2024-07-15 14:00:47.733150] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.013 [2024-07-15 14:00:47.986094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.945 [2024-07-15 14:00:48.683522] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:02.945 [2024-07-15 14:00:48.684331] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:08:02.945 [2024-07-15 14:00:48.691495] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:02.945 [2024-07-15 14:00:48.691789] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:08:02.945 [2024-07-15 14:00:48.699523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:02.945 [2024-07-15 14:00:48.699793] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:08:02.945 [2024-07-15 14:00:48.700032] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:08:02.945 [2024-07-15 14:00:48.790771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:08:02.945 [2024-07-15 14:00:48.791372] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.945 [2024-07-15 14:00:48.791657] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:02.945 [2024-07-15 14:00:48.791911] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.945 [2024-07-15 14:00:48.792549] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.945 [2024-07-15 14:00:48.792783] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:08:02.945 14:00:48 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.945 14:00:48 json_config -- common/autotest_common.sh@862 -- # return 0 00:08:02.945 14:00:48 json_config -- json_config/common.sh@26 -- # echo '' 00:08:02.945 00:08:02.945 14:00:48 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:08:02.945 14:00:48 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:02.945 INFO: Checking if target configuration is the same... 00:08:02.945 14:00:48 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:02.945 14:00:48 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:08:02.945 14:00:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:02.945 + '[' 2 -ne 2 ']' 00:08:02.945 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:02.945 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:08:02.945 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:02.945 +++ basename /dev/fd/62 00:08:02.945 ++ mktemp /tmp/62.XXX 00:08:02.945 + tmp_file_1=/tmp/62.yku 00:08:02.945 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:02.945 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:02.945 + tmp_file_2=/tmp/spdk_tgt_config.json.qIz 00:08:02.945 + ret=0 00:08:02.945 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:03.507 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:03.507 + diff -u /tmp/62.yku /tmp/spdk_tgt_config.json.qIz 00:08:03.507 + echo 'INFO: JSON config files are the same' 00:08:03.507 INFO: JSON config files are the same 00:08:03.507 + rm /tmp/62.yku /tmp/spdk_tgt_config.json.qIz 00:08:03.507 + exit 0 00:08:03.507 14:00:49 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:08:03.507 14:00:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:03.507 INFO: changing configuration and checking if this can be detected... 00:08:03.507 14:00:49 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:03.507 14:00:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:03.763 14:00:49 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:03.763 14:00:49 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:08:03.763 14:00:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:03.763 + '[' 2 -ne 2 ']' 00:08:03.763 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:08:03.763 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:08:03.763 + rootdir=/home/vagrant/spdk_repo/spdk 00:08:03.763 +++ basename /dev/fd/62 00:08:03.763 ++ mktemp /tmp/62.XXX 00:08:03.763 + tmp_file_1=/tmp/62.KKP 00:08:03.763 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:03.763 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:03.763 + tmp_file_2=/tmp/spdk_tgt_config.json.dlh 00:08:03.763 + ret=0 00:08:03.763 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:04.326 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:08:04.326 + diff -u /tmp/62.KKP /tmp/spdk_tgt_config.json.dlh 00:08:04.326 + ret=1 00:08:04.326 + echo '=== Start of file: /tmp/62.KKP ===' 00:08:04.326 + cat /tmp/62.KKP 00:08:04.326 + echo '=== End of file: /tmp/62.KKP ===' 00:08:04.326 + echo '' 00:08:04.326 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dlh ===' 00:08:04.326 + cat /tmp/spdk_tgt_config.json.dlh 00:08:04.326 + echo '=== End of file: /tmp/spdk_tgt_config.json.dlh ===' 00:08:04.326 + echo '' 00:08:04.326 + rm /tmp/62.KKP /tmp/spdk_tgt_config.json.dlh 00:08:04.326 + exit 1 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:08:04.326 INFO: configuration change detected. 
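Both verdicts above ("JSON config files are the same", then "configuration change detected" once MallocBdevForConfigChangeCheck has been deleted) come from the same comparison: pull the live config over RPC, normalize both documents with config_filter.py -method sort, and let a plain diff decide. Roughly, with the json_diff.sh plumbing simplified and the temp-file names below purely illustrative:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
    < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/stored.sorted
diff -u /tmp/live.sorted /tmp/stored.sorted \
    && echo 'INFO: JSON config files are the same' \
    || echo 'INFO: configuration change detected.'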
00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:08:04.326 14:00:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.326 14:00:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@317 -- # [[ -n 178696 ]] 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:08:04.326 14:00:50 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.326 14:00:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:08:04.326 14:00:50 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:08:04.326 14:00:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:08:04.583 14:00:50 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:08:04.583 14:00:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:08:04.841 14:00:50 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:08:04.841 14:00:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:08:05.098 14:00:51 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:08:05.098 14:00:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:08:05.664 14:00:51 json_config -- json_config/json_config.sh@193 -- # uname -s 00:08:05.664 14:00:51 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:08:05.664 14:00:51 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:08:05.664 14:00:51 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:08:05.664 14:00:51 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.664 14:00:51 json_config -- json_config/json_config.sh@323 -- # killprocess 178696 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@948 -- # '[' -z 178696 ']' 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@952 -- # kill -0 178696 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@953 -- # uname 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178696 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:05.664 14:00:51 json_config 
-- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178696' 00:08:05.664 killing process with pid 178696 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@967 -- # kill 178696 00:08:05.664 14:00:51 json_config -- common/autotest_common.sh@972 -- # wait 178696 00:08:07.033 14:00:52 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:08:07.033 14:00:52 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:08:07.033 14:00:52 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:07.033 14:00:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.033 14:00:52 json_config -- json_config/json_config.sh@328 -- # return 0 00:08:07.033 14:00:52 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:08:07.033 INFO: Success 00:08:07.033 00:08:07.033 real 0m15.445s 00:08:07.033 user 0m22.671s 00:08:07.033 sys 0m2.673s 00:08:07.033 14:00:52 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.033 14:00:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.033 ************************************ 00:08:07.033 END TEST json_config 00:08:07.033 ************************************ 00:08:07.033 14:00:52 -- common/autotest_common.sh@1142 -- # return 0 00:08:07.033 14:00:52 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:07.033 14:00:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.033 14:00:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.033 14:00:52 -- common/autotest_common.sh@10 -- # set +x 00:08:07.033 ************************************ 00:08:07.033 START TEST json_config_extra_key 00:08:07.033 ************************************ 00:08:07.033 14:00:52 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:07.033 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6eb37903-5e6e-4bf2-b995-7433baab6b1f 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@18 
-- # NVME_HOSTID=6eb37903-5e6e-4bf2-b995-7433baab6b1f 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.033 14:00:52 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.033 14:00:52 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.033 14:00:52 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.033 14:00:52 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.033 14:00:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:08:07.033 14:00:52 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:08:07.033 14:00:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:08:07.033 14:00:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:07.034 14:00:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.034 14:00:52 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.034 14:00:52 json_config_extra_key -- 
json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:07.034 INFO: launching applications... 00:08:07.034 14:00:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=178878 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:07.034 Waiting for target to run... 00:08:07.034 14:00:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 178878 /var/tmp/spdk_tgt.sock 00:08:07.034 14:00:52 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 178878 ']' 00:08:07.034 14:00:52 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:07.034 14:00:52 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.034 14:00:52 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:07.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:08:07.034 14:00:52 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.034 14:00:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:07.034 [2024-07-15 14:00:52.838583] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:07.034 [2024-07-15 14:00:52.839028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178878 ] 00:08:07.595 [2024-07-15 14:00:53.365392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.852 [2024-07-15 14:00:53.650554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.416 14:00:54 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.416 14:00:54 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:08.416 00:08:08.416 14:00:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:08.416 INFO: shutting down applications... 00:08:08.416 14:00:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 178878 ]] 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 178878 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 178878 00:08:08.416 14:00:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:08.994 14:00:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:08.994 14:00:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:08.994 14:00:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 178878 00:08:08.994 14:00:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:09.561 14:00:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:09.561 14:00:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.561 14:00:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 178878 00:08:09.561 14:00:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:10.124 14:00:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:10.124 14:00:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:10.124 14:00:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 178878 00:08:10.124 14:00:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:10.689 14:00:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:10.689 14:00:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:10.689 14:00:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 178878 00:08:10.689 14:00:56 json_config_extra_key -- 
json_config/common.sh@45 -- # sleep 0.5 00:08:10.946 14:00:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:10.946 14:00:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:10.946 14:00:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 178878 00:08:10.946 14:00:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:11.514 14:00:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:11.514 14:00:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:11.514 14:00:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 178878 00:08:11.514 14:00:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:11.514 14:00:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:11.514 14:00:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:11.514 14:00:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:11.514 SPDK target shutdown done 00:08:11.514 14:00:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:11.514 Success 00:08:11.514 00:08:11.514 real 0m4.680s 00:08:11.514 user 0m4.148s 00:08:11.514 sys 0m0.651s 00:08:11.514 14:00:57 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.514 14:00:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:11.514 ************************************ 00:08:11.514 END TEST json_config_extra_key 00:08:11.514 ************************************ 00:08:11.514 14:00:57 -- common/autotest_common.sh@1142 -- # return 0 00:08:11.514 14:00:57 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:11.514 14:00:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:11.514 14:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.514 14:00:57 -- common/autotest_common.sh@10 -- # set +x 00:08:11.514 ************************************ 00:08:11.514 START TEST alias_rpc 00:08:11.514 ************************************ 00:08:11.514 14:00:57 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:11.771 * Looking for test storage... 00:08:11.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:11.771 14:00:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:11.771 14:00:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=178995 00:08:11.771 14:00:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:11.771 14:00:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 178995 00:08:11.771 14:00:57 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 178995 ']' 00:08:11.771 14:00:57 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.771 14:00:57 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:11.771 14:00:57 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:11.771 14:00:57 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:11.771 14:00:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.771 [2024-07-15 14:00:57.581756] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:11.771 [2024-07-15 14:00:57.582557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178995 ] 00:08:11.771 [2024-07-15 14:00:57.743968] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.028 [2024-07-15 14:00:57.996686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.961 14:00:58 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:12.961 14:00:58 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:12.961 14:00:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:13.219 14:00:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 178995 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 178995 ']' 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 178995 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 178995 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 178995' 00:08:13.219 killing process with pid 178995 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@967 -- # kill 178995 00:08:13.219 14:00:59 alias_rpc -- common/autotest_common.sh@972 -- # wait 178995 00:08:15.746 00:08:15.746 real 0m3.794s 00:08:15.746 user 0m3.931s 00:08:15.746 sys 0m0.509s 00:08:15.746 14:01:01 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.746 14:01:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.746 ************************************ 00:08:15.746 END TEST alias_rpc 00:08:15.746 ************************************ 00:08:15.746 14:01:01 -- common/autotest_common.sh@1142 -- # return 0 00:08:15.746 14:01:01 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:08:15.746 14:01:01 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:15.746 14:01:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:15.746 14:01:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.747 14:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 ************************************ 00:08:15.747 START TEST spdkcli_tcp 00:08:15.747 ************************************ 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:15.747 * Looking for test storage... 
00:08:15.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=179103 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:15.747 14:01:01 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 179103 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 179103 ']' 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:15.747 14:01:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.747 [2024-07-15 14:01:01.451842] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:08:15.747 [2024-07-15 14:01:01.452305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179103 ] 00:08:15.747 [2024-07-15 14:01:01.623208] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.032 [2024-07-15 14:01:01.885069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.032 [2024-07-15 14:01:01.885069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.968 14:01:02 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.968 14:01:02 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:08:16.968 14:01:02 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=179143 00:08:16.968 14:01:02 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:16.968 14:01:02 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:16.968 [ 00:08:16.968 "spdk_get_version", 00:08:16.968 "rpc_get_methods", 00:08:16.968 "keyring_get_keys", 00:08:16.968 "trace_get_info", 00:08:16.968 "trace_get_tpoint_group_mask", 00:08:16.968 "trace_disable_tpoint_group", 00:08:16.968 "trace_enable_tpoint_group", 00:08:16.968 "trace_clear_tpoint_mask", 00:08:16.968 "trace_set_tpoint_mask", 00:08:16.968 "framework_get_pci_devices", 00:08:16.968 "framework_get_config", 00:08:16.968 "framework_get_subsystems", 00:08:16.968 "iobuf_get_stats", 00:08:16.968 "iobuf_set_options", 00:08:16.968 "sock_get_default_impl", 00:08:16.968 "sock_set_default_impl", 00:08:16.968 "sock_impl_set_options", 00:08:16.968 "sock_impl_get_options", 00:08:16.968 "vmd_rescan", 00:08:16.968 "vmd_remove_device", 00:08:16.968 "vmd_enable", 00:08:16.968 "accel_get_stats", 00:08:16.968 "accel_set_options", 00:08:16.968 "accel_set_driver", 00:08:16.968 "accel_crypto_key_destroy", 00:08:16.968 "accel_crypto_keys_get", 00:08:16.968 "accel_crypto_key_create", 00:08:16.968 "accel_assign_opc", 00:08:16.968 "accel_get_module_info", 00:08:16.968 "accel_get_opc_assignments", 00:08:16.968 "notify_get_notifications", 00:08:16.968 "notify_get_types", 00:08:16.968 "bdev_get_histogram", 00:08:16.968 "bdev_enable_histogram", 00:08:16.968 "bdev_set_qos_limit", 00:08:16.968 "bdev_set_qd_sampling_period", 00:08:16.968 "bdev_get_bdevs", 00:08:16.968 "bdev_reset_iostat", 00:08:16.968 "bdev_get_iostat", 00:08:16.968 "bdev_examine", 00:08:16.968 "bdev_wait_for_examine", 00:08:16.968 "bdev_set_options", 00:08:16.968 "scsi_get_devices", 00:08:16.968 "thread_set_cpumask", 00:08:16.968 "framework_get_governor", 00:08:16.968 "framework_get_scheduler", 00:08:16.968 "framework_set_scheduler", 00:08:16.968 "framework_get_reactors", 00:08:16.968 "thread_get_io_channels", 00:08:16.968 "thread_get_pollers", 00:08:16.968 "thread_get_stats", 00:08:16.968 "framework_monitor_context_switch", 00:08:16.968 "spdk_kill_instance", 00:08:16.968 "log_enable_timestamps", 00:08:16.968 "log_get_flags", 00:08:16.968 "log_clear_flag", 00:08:16.968 "log_set_flag", 00:08:16.968 "log_get_level", 00:08:16.968 "log_set_level", 00:08:16.968 "log_get_print_level", 00:08:16.968 "log_set_print_level", 00:08:16.968 "framework_enable_cpumask_locks", 00:08:16.968 "framework_disable_cpumask_locks", 00:08:16.968 "framework_wait_init", 00:08:16.968 "framework_start_init", 00:08:16.968 
"virtio_blk_create_transport", 00:08:16.968 "virtio_blk_get_transports", 00:08:16.968 "vhost_controller_set_coalescing", 00:08:16.968 "vhost_get_controllers", 00:08:16.968 "vhost_delete_controller", 00:08:16.968 "vhost_create_blk_controller", 00:08:16.968 "vhost_scsi_controller_remove_target", 00:08:16.968 "vhost_scsi_controller_add_target", 00:08:16.968 "vhost_start_scsi_controller", 00:08:16.968 "vhost_create_scsi_controller", 00:08:16.968 "nbd_get_disks", 00:08:16.968 "nbd_stop_disk", 00:08:16.968 "nbd_start_disk", 00:08:16.968 "env_dpdk_get_mem_stats", 00:08:16.968 "nvmf_stop_mdns_prr", 00:08:16.968 "nvmf_publish_mdns_prr", 00:08:16.968 "nvmf_subsystem_get_listeners", 00:08:16.968 "nvmf_subsystem_get_qpairs", 00:08:16.968 "nvmf_subsystem_get_controllers", 00:08:16.968 "nvmf_get_stats", 00:08:16.968 "nvmf_get_transports", 00:08:16.968 "nvmf_create_transport", 00:08:16.968 "nvmf_get_targets", 00:08:16.968 "nvmf_delete_target", 00:08:16.968 "nvmf_create_target", 00:08:16.968 "nvmf_subsystem_allow_any_host", 00:08:16.968 "nvmf_subsystem_remove_host", 00:08:16.968 "nvmf_subsystem_add_host", 00:08:16.968 "nvmf_ns_remove_host", 00:08:16.968 "nvmf_ns_add_host", 00:08:16.968 "nvmf_subsystem_remove_ns", 00:08:16.968 "nvmf_subsystem_add_ns", 00:08:16.968 "nvmf_subsystem_listener_set_ana_state", 00:08:16.968 "nvmf_discovery_get_referrals", 00:08:16.968 "nvmf_discovery_remove_referral", 00:08:16.968 "nvmf_discovery_add_referral", 00:08:16.968 "nvmf_subsystem_remove_listener", 00:08:16.968 "nvmf_subsystem_add_listener", 00:08:16.968 "nvmf_delete_subsystem", 00:08:16.968 "nvmf_create_subsystem", 00:08:16.968 "nvmf_get_subsystems", 00:08:16.968 "nvmf_set_crdt", 00:08:16.968 "nvmf_set_config", 00:08:16.968 "nvmf_set_max_subsystems", 00:08:16.968 "iscsi_get_histogram", 00:08:16.968 "iscsi_enable_histogram", 00:08:16.968 "iscsi_set_options", 00:08:16.968 "iscsi_get_auth_groups", 00:08:16.968 "iscsi_auth_group_remove_secret", 00:08:16.968 "iscsi_auth_group_add_secret", 00:08:16.968 "iscsi_delete_auth_group", 00:08:16.968 "iscsi_create_auth_group", 00:08:16.968 "iscsi_set_discovery_auth", 00:08:16.968 "iscsi_get_options", 00:08:16.968 "iscsi_target_node_request_logout", 00:08:16.968 "iscsi_target_node_set_redirect", 00:08:16.968 "iscsi_target_node_set_auth", 00:08:16.968 "iscsi_target_node_add_lun", 00:08:16.968 "iscsi_get_stats", 00:08:16.968 "iscsi_get_connections", 00:08:16.968 "iscsi_portal_group_set_auth", 00:08:16.968 "iscsi_start_portal_group", 00:08:16.968 "iscsi_delete_portal_group", 00:08:16.968 "iscsi_create_portal_group", 00:08:16.968 "iscsi_get_portal_groups", 00:08:16.969 "iscsi_delete_target_node", 00:08:16.969 "iscsi_target_node_remove_pg_ig_maps", 00:08:16.969 "iscsi_target_node_add_pg_ig_maps", 00:08:16.969 "iscsi_create_target_node", 00:08:16.969 "iscsi_get_target_nodes", 00:08:16.969 "iscsi_delete_initiator_group", 00:08:16.969 "iscsi_initiator_group_remove_initiators", 00:08:16.969 "iscsi_initiator_group_add_initiators", 00:08:16.969 "iscsi_create_initiator_group", 00:08:16.969 "iscsi_get_initiator_groups", 00:08:16.969 "keyring_linux_set_options", 00:08:16.969 "keyring_file_remove_key", 00:08:16.969 "keyring_file_add_key", 00:08:16.969 "iaa_scan_accel_module", 00:08:16.969 "dsa_scan_accel_module", 00:08:16.969 "ioat_scan_accel_module", 00:08:16.969 "accel_error_inject_error", 00:08:16.969 "bdev_iscsi_delete", 00:08:16.969 "bdev_iscsi_create", 00:08:16.969 "bdev_iscsi_set_options", 00:08:16.969 "bdev_virtio_attach_controller", 00:08:16.969 "bdev_virtio_scsi_get_devices", 00:08:16.969 
"bdev_virtio_detach_controller", 00:08:16.969 "bdev_virtio_blk_set_hotplug", 00:08:16.969 "bdev_ftl_set_property", 00:08:16.969 "bdev_ftl_get_properties", 00:08:16.969 "bdev_ftl_get_stats", 00:08:16.969 "bdev_ftl_unmap", 00:08:16.969 "bdev_ftl_unload", 00:08:16.969 "bdev_ftl_delete", 00:08:16.969 "bdev_ftl_load", 00:08:16.969 "bdev_ftl_create", 00:08:16.969 "bdev_aio_delete", 00:08:16.969 "bdev_aio_rescan", 00:08:16.969 "bdev_aio_create", 00:08:16.969 "blobfs_create", 00:08:16.969 "blobfs_detect", 00:08:16.969 "blobfs_set_cache_size", 00:08:16.969 "bdev_zone_block_delete", 00:08:16.969 "bdev_zone_block_create", 00:08:16.969 "bdev_delay_delete", 00:08:16.969 "bdev_delay_create", 00:08:16.969 "bdev_delay_update_latency", 00:08:16.969 "bdev_split_delete", 00:08:16.969 "bdev_split_create", 00:08:16.969 "bdev_error_inject_error", 00:08:16.969 "bdev_error_delete", 00:08:16.969 "bdev_error_create", 00:08:16.969 "bdev_raid_set_options", 00:08:16.969 "bdev_raid_remove_base_bdev", 00:08:16.969 "bdev_raid_add_base_bdev", 00:08:16.969 "bdev_raid_delete", 00:08:16.969 "bdev_raid_create", 00:08:16.969 "bdev_raid_get_bdevs", 00:08:16.969 "bdev_lvol_set_parent_bdev", 00:08:16.969 "bdev_lvol_set_parent", 00:08:16.969 "bdev_lvol_check_shallow_copy", 00:08:16.969 "bdev_lvol_start_shallow_copy", 00:08:16.969 "bdev_lvol_grow_lvstore", 00:08:16.969 "bdev_lvol_get_lvols", 00:08:16.969 "bdev_lvol_get_lvstores", 00:08:16.969 "bdev_lvol_delete", 00:08:16.969 "bdev_lvol_set_read_only", 00:08:16.969 "bdev_lvol_resize", 00:08:16.969 "bdev_lvol_decouple_parent", 00:08:16.969 "bdev_lvol_inflate", 00:08:16.969 "bdev_lvol_rename", 00:08:16.969 "bdev_lvol_clone_bdev", 00:08:16.969 "bdev_lvol_clone", 00:08:16.969 "bdev_lvol_snapshot", 00:08:16.969 "bdev_lvol_create", 00:08:16.969 "bdev_lvol_delete_lvstore", 00:08:16.969 "bdev_lvol_rename_lvstore", 00:08:16.969 "bdev_lvol_create_lvstore", 00:08:16.969 "bdev_passthru_delete", 00:08:16.969 "bdev_passthru_create", 00:08:16.969 "bdev_nvme_cuse_unregister", 00:08:16.969 "bdev_nvme_cuse_register", 00:08:16.969 "bdev_opal_new_user", 00:08:16.969 "bdev_opal_set_lock_state", 00:08:16.969 "bdev_opal_delete", 00:08:16.969 "bdev_opal_get_info", 00:08:16.969 "bdev_opal_create", 00:08:16.969 "bdev_nvme_opal_revert", 00:08:16.969 "bdev_nvme_opal_init", 00:08:16.969 "bdev_nvme_send_cmd", 00:08:16.969 "bdev_nvme_get_path_iostat", 00:08:16.969 "bdev_nvme_get_mdns_discovery_info", 00:08:16.969 "bdev_nvme_stop_mdns_discovery", 00:08:16.969 "bdev_nvme_start_mdns_discovery", 00:08:16.969 "bdev_nvme_set_multipath_policy", 00:08:16.969 "bdev_nvme_set_preferred_path", 00:08:16.969 "bdev_nvme_get_io_paths", 00:08:16.969 "bdev_nvme_remove_error_injection", 00:08:16.969 "bdev_nvme_add_error_injection", 00:08:16.969 "bdev_nvme_get_discovery_info", 00:08:16.969 "bdev_nvme_stop_discovery", 00:08:16.969 "bdev_nvme_start_discovery", 00:08:16.969 "bdev_nvme_get_controller_health_info", 00:08:16.969 "bdev_nvme_disable_controller", 00:08:16.969 "bdev_nvme_enable_controller", 00:08:16.969 "bdev_nvme_reset_controller", 00:08:16.969 "bdev_nvme_get_transport_statistics", 00:08:16.969 "bdev_nvme_apply_firmware", 00:08:16.969 "bdev_nvme_detach_controller", 00:08:16.969 "bdev_nvme_get_controllers", 00:08:16.969 "bdev_nvme_attach_controller", 00:08:16.969 "bdev_nvme_set_hotplug", 00:08:16.969 "bdev_nvme_set_options", 00:08:16.969 "bdev_null_resize", 00:08:16.969 "bdev_null_delete", 00:08:16.969 "bdev_null_create", 00:08:16.969 "bdev_malloc_delete", 00:08:16.969 "bdev_malloc_create" 00:08:16.969 ] 00:08:16.969 14:01:02 
spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:16.969 14:01:02 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.969 14:01:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:17.228 14:01:02 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:17.228 14:01:02 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 179103 00:08:17.228 14:01:02 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 179103 ']' 00:08:17.228 14:01:02 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 179103 00:08:17.228 14:01:02 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:08:17.228 14:01:02 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:17.228 14:01:02 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 179103 00:08:17.228 14:01:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:17.228 killing process with pid 179103 00:08:17.228 14:01:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:17.228 14:01:03 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 179103' 00:08:17.228 14:01:03 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 179103 00:08:17.228 14:01:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 179103 00:08:19.783 00:08:19.783 real 0m3.921s 00:08:19.783 user 0m7.023s 00:08:19.783 sys 0m0.593s 00:08:19.783 14:01:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.783 14:01:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.783 ************************************ 00:08:19.783 END TEST spdkcli_tcp 00:08:19.783 ************************************ 00:08:19.783 14:01:05 -- common/autotest_common.sh@1142 -- # return 0 00:08:19.783 14:01:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:19.783 14:01:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:19.783 14:01:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.783 14:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:19.783 ************************************ 00:08:19.783 START TEST dpdk_mem_utility 00:08:19.783 ************************************ 00:08:19.783 14:01:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:19.783 * Looking for test storage... 
00:08:19.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:19.783 14:01:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:19.783 14:01:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=179242 00:08:19.783 14:01:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 179242 00:08:19.783 14:01:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 179242 ']' 00:08:19.783 14:01:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.783 14:01:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:19.783 14:01:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:19.783 14:01:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.783 14:01:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:19.783 14:01:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:19.783 [2024-07-15 14:01:05.422032] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:19.783 [2024-07-15 14:01:05.422595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179242 ] 00:08:19.783 [2024-07-15 14:01:05.576871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.041 [2024-07-15 14:01:05.792477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.622 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:20.622 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:08:20.622 14:01:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:20.622 14:01:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:20.622 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.622 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:20.622 { 00:08:20.622 "filename": "/tmp/spdk_mem_dump.txt" 00:08:20.622 } 00:08:20.622 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.622 14:01:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:20.883 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:20.883 1 heaps totaling size 820.000000 MiB 00:08:20.883 size: 820.000000 MiB heap id: 0 00:08:20.883 end heaps---------- 00:08:20.883 8 mempools totaling size 598.116089 MiB 00:08:20.883 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:20.883 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:20.883 size: 84.521057 MiB name: bdev_io_179242 00:08:20.883 size: 51.011292 MiB name: evtpool_179242 00:08:20.883 size: 50.003479 MiB name: msgpool_179242 00:08:20.883 size: 21.763794 MiB name: PDU_Pool 00:08:20.883 size: 19.513306 MiB name: 
SCSI_TASK_Pool 00:08:20.883 size: 0.026123 MiB name: Session_Pool 00:08:20.883 end mempools------- 00:08:20.883 6 memzones totaling size 4.142822 MiB 00:08:20.883 size: 1.000366 MiB name: RG_ring_0_179242 00:08:20.883 size: 1.000366 MiB name: RG_ring_1_179242 00:08:20.883 size: 1.000366 MiB name: RG_ring_4_179242 00:08:20.883 size: 1.000366 MiB name: RG_ring_5_179242 00:08:20.883 size: 0.125366 MiB name: RG_ring_2_179242 00:08:20.883 size: 0.015991 MiB name: RG_ring_3_179242 00:08:20.883 end memzones------- 00:08:20.883 14:01:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:20.883 heap id: 0 total size: 820.000000 MiB number of busy elements: 227 number of free elements: 18 00:08:20.883 list of free elements. size: 18.469482 MiB 00:08:20.883 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:20.883 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:20.883 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:20.883 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:20.883 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:20.883 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:20.883 element at address: 0x200019600000 with size: 0.999329 MiB 00:08:20.883 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:20.883 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:20.883 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:20.883 element at address: 0x200019900040 with size: 0.937256 MiB 00:08:20.883 element at address: 0x200000200000 with size: 0.834106 MiB 00:08:20.883 element at address: 0x20001b000000 with size: 0.561218 MiB 00:08:20.884 element at address: 0x200019200000 with size: 0.489197 MiB 00:08:20.884 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:20.884 element at address: 0x200013800000 with size: 0.468872 MiB 00:08:20.884 element at address: 0x200028400000 with size: 0.399719 MiB 00:08:20.884 element at address: 0x200003a00000 with size: 0.356140 MiB 00:08:20.884 list of standard malloc elements. 
size: 199.266113 MiB 00:08:20.884 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:20.884 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:20.884 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:20.884 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:20.884 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:20.884 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:20.884 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:20.884 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:20.884 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:08:20.884 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:08:20.884 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:20.884 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200003eff000 with size: 0.000244 MiB 
00:08:20.884 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:20.884 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:08:20.884 element at 
address: 0x20001b0900c0 with size: 0.000244 MiB 00:08:20.884 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0931c0 
with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:20.885 element at address: 0x200028466540 with size: 0.000244 MiB 00:08:20.885 element at address: 0x200028466640 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846d300 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846e080 with size: 0.000244 MiB 
00:08:20.885 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846e480 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:20.885 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:20.886 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:20.886 list of memzone associated elements. 
size: 602.264404 MiB 00:08:20.886 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:20.886 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:20.886 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:20.886 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:20.886 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:20.886 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_179242_0 00:08:20.886 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:20.886 associated memzone info: size: 48.002930 MiB name: MP_evtpool_179242_0 00:08:20.886 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:20.886 associated memzone info: size: 48.002930 MiB name: MP_msgpool_179242_0 00:08:20.886 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:20.886 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:20.886 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:20.886 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:20.886 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:20.886 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_179242 00:08:20.886 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:20.886 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_179242 00:08:20.886 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:20.886 associated memzone info: size: 1.007996 MiB name: MP_evtpool_179242 00:08:20.886 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:20.886 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:20.886 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:20.886 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:20.886 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:20.886 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:20.886 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:20.886 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:20.886 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:20.886 associated memzone info: size: 1.000366 MiB name: RG_ring_0_179242 00:08:20.886 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:20.886 associated memzone info: size: 1.000366 MiB name: RG_ring_1_179242 00:08:20.886 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:20.886 associated memzone info: size: 1.000366 MiB name: RG_ring_4_179242 00:08:20.886 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:20.886 associated memzone info: size: 1.000366 MiB name: RG_ring_5_179242 00:08:20.886 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:20.886 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_179242 00:08:20.886 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:20.886 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:20.886 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:20.886 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:20.886 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:20.886 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:20.886 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:20.886 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_179242 00:08:20.886 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:20.886 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:20.886 element at address: 0x200028466740 with size: 0.023804 MiB 00:08:20.886 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:20.886 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:20.886 associated memzone info: size: 0.015991 MiB name: RG_ring_3_179242 00:08:20.886 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:08:20.886 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:20.886 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:08:20.886 associated memzone info: size: 0.000183 MiB name: MP_msgpool_179242 00:08:20.886 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:20.886 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_179242 00:08:20.886 element at address: 0x20002846d400 with size: 0.000366 MiB 00:08:20.886 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:20.886 14:01:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:20.886 14:01:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 179242 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 179242 ']' 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 179242 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 179242 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 179242' 00:08:20.886 killing process with pid 179242 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 179242 00:08:20.886 14:01:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 179242 00:08:23.420 00:08:23.420 real 0m3.654s 00:08:23.420 user 0m3.687s 00:08:23.420 sys 0m0.524s 00:08:23.420 14:01:08 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.420 14:01:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:23.420 ************************************ 00:08:23.420 END TEST dpdk_mem_utility 00:08:23.420 ************************************ 00:08:23.420 14:01:08 -- common/autotest_common.sh@1142 -- # return 0 00:08:23.420 14:01:08 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:23.420 14:01:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:23.420 14:01:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.420 14:01:08 -- common/autotest_common.sh@10 -- # set +x 00:08:23.420 ************************************ 00:08:23.420 START TEST event 00:08:23.420 ************************************ 00:08:23.420 14:01:09 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:23.420 * Looking for test storage... 
00:08:23.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:23.421 14:01:09 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:23.421 14:01:09 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:23.421 14:01:09 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:23.421 14:01:09 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:08:23.421 14:01:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.421 14:01:09 event -- common/autotest_common.sh@10 -- # set +x 00:08:23.421 ************************************ 00:08:23.421 START TEST event_perf 00:08:23.421 ************************************ 00:08:23.421 14:01:09 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:23.421 Running I/O for 1 seconds...[2024-07-15 14:01:09.137783] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:23.421 [2024-07-15 14:01:09.138117] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179349 ] 00:08:23.421 [2024-07-15 14:01:09.320819] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.680 [2024-07-15 14:01:09.545700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.680 [2024-07-15 14:01:09.545855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.680 [2024-07-15 14:01:09.545855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.680 Running I/O for 1 seconds...[2024-07-15 14:01:09.545770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.058 00:08:25.058 lcore 0: 166501 00:08:25.058 lcore 1: 166502 00:08:25.058 lcore 2: 166499 00:08:25.058 lcore 3: 166500 00:08:25.058 done. 00:08:25.058 00:08:25.058 real 0m1.844s 00:08:25.058 user 0m4.592s 00:08:25.058 sys 0m0.128s 00:08:25.058 14:01:10 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.058 14:01:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:25.058 ************************************ 00:08:25.058 END TEST event_perf 00:08:25.058 ************************************ 00:08:25.058 14:01:10 event -- common/autotest_common.sh@1142 -- # return 0 00:08:25.058 14:01:10 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:25.058 14:01:10 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:25.058 14:01:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.058 14:01:10 event -- common/autotest_common.sh@10 -- # set +x 00:08:25.058 ************************************ 00:08:25.058 START TEST event_reactor 00:08:25.058 ************************************ 00:08:25.058 14:01:10 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:25.058 [2024-07-15 14:01:11.039022] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:08:25.058 [2024-07-15 14:01:11.039445] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179402 ] 00:08:25.317 [2024-07-15 14:01:11.198657] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.586 [2024-07-15 14:01:11.413048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.963 test_start 00:08:26.963 oneshot 00:08:26.963 tick 100 00:08:26.963 tick 100 00:08:26.963 tick 250 00:08:26.963 tick 100 00:08:26.963 tick 100 00:08:26.963 tick 100 00:08:26.963 tick 500 00:08:26.963 tick 250 00:08:26.963 tick 100 00:08:26.963 tick 100 00:08:26.963 tick 250 00:08:26.963 tick 100 00:08:26.963 tick 100 00:08:26.963 test_end 00:08:26.963 00:08:26.963 real 0m1.806s 00:08:26.963 user 0m1.580s 00:08:26.963 sys 0m0.116s 00:08:26.963 14:01:12 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:26.963 14:01:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:26.963 ************************************ 00:08:26.963 END TEST event_reactor 00:08:26.963 ************************************ 00:08:26.963 14:01:12 event -- common/autotest_common.sh@1142 -- # return 0 00:08:26.963 14:01:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:26.964 14:01:12 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:26.964 14:01:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.964 14:01:12 event -- common/autotest_common.sh@10 -- # set +x 00:08:26.964 ************************************ 00:08:26.964 START TEST event_reactor_perf 00:08:26.964 ************************************ 00:08:26.964 14:01:12 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:26.964 [2024-07-15 14:01:12.906949] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:08:26.964 [2024-07-15 14:01:12.907239] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179453 ] 00:08:27.223 [2024-07-15 14:01:13.069480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.481 [2024-07-15 14:01:13.290274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.855 test_start 00:08:28.855 test_end 00:08:28.855 Performance: 605108 events per second 00:08:28.855 00:08:28.855 real 0m1.816s 00:08:28.855 user 0m1.590s 00:08:28.855 sys 0m0.115s 00:08:28.855 14:01:14 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.855 14:01:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:28.855 ************************************ 00:08:28.855 END TEST event_reactor_perf 00:08:28.855 ************************************ 00:08:28.855 14:01:14 event -- common/autotest_common.sh@1142 -- # return 0 00:08:28.855 14:01:14 event -- event/event.sh@49 -- # uname -s 00:08:28.855 14:01:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:28.855 14:01:14 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:28.855 14:01:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:28.855 14:01:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.855 14:01:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:28.855 ************************************ 00:08:28.855 START TEST event_scheduler 00:08:28.855 ************************************ 00:08:28.855 14:01:14 event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:28.855 * Looking for test storage... 00:08:28.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:28.855 14:01:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:28.855 14:01:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:28.855 14:01:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=179525 00:08:28.855 14:01:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:28.855 14:01:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 179525 00:08:28.855 14:01:14 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 179525 ']' 00:08:28.855 14:01:14 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.855 14:01:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.855 14:01:14 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:28.855 14:01:14 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.855 14:01:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:29.113 [2024-07-15 14:01:14.858693] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:29.113 [2024-07-15 14:01:14.859256] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179525 ] 00:08:29.113 [2024-07-15 14:01:15.042415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:29.371 [2024-07-15 14:01:15.301583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.371 [2024-07-15 14:01:15.301742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.371 [2024-07-15 14:01:15.301854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.371 [2024-07-15 14:01:15.301860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.937 14:01:15 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.937 14:01:15 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:08:29.937 14:01:15 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:29.937 14:01:15 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.937 14:01:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:29.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.937 POWER: Cannot set governor of lcore 0 to userspace 00:08:29.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.937 POWER: Cannot set governor of lcore 0 to performance 00:08:29.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.937 POWER: Cannot set governor of lcore 0 to userspace 00:08:29.937 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:29.937 POWER: Cannot set governor of lcore 0 to userspace 00:08:29.937 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:29.937 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:29.937 POWER: Unable to set Power Management Environment for lcore 0 00:08:29.937 [2024-07-15 14:01:15.880215] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:08:29.937 [2024-07-15 14:01:15.880278] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:08:29.937 [2024-07-15 14:01:15.880334] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:08:29.937 [2024-07-15 14:01:15.880371] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:29.937 [2024-07-15 14:01:15.880418] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:29.937 [2024-07-15 14:01:15.880443] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:29.937 14:01:15 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.937 14:01:15 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:29.937 14:01:15 event.event_scheduler -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.937 14:01:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:30.504 [2024-07-15 14:01:16.242027] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:30.504 14:01:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.504 14:01:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:30.504 14:01:16 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:30.504 14:01:16 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.504 14:01:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:30.504 ************************************ 00:08:30.504 START TEST scheduler_create_thread 00:08:30.504 ************************************ 00:08:30.504 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 2 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 3 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 4 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 5 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 6 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 7 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 8 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 9 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 10 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # 
set +x 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.505 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.072 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:31.072 00:08:31.072 real 0m0.594s 00:08:31.072 user 0m0.012s 00:08:31.072 ************************************ 00:08:31.072 END TEST scheduler_create_thread 00:08:31.072 ************************************ 00:08:31.072 sys 0m0.005s 00:08:31.072 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.072 14:01:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:08:31.072 14:01:16 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:31.072 14:01:16 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 179525 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 179525 ']' 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 179525 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 179525 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:08:31.072 killing process with pid 179525 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 179525' 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 179525 00:08:31.072 14:01:16 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 179525 00:08:31.330 [2024-07-15 14:01:17.328847] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
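For reference, the scheduler exercise traced above reduces to roughly the following RPC sequence. This is a minimal sketch, not the test script itself: it assumes the scheduler test app shown in the log is started with --wait-for-rpc on the default /var/tmp/spdk.sock, and that scheduler_thread_create prints the new thread id (as the thread_id=11/12 assignments in the trace suggest); the -m/-a values are illustrative.

  # Start the scheduler test app (same invocation as in the log) and keep its pid.
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc framework_set_scheduler dynamic   # select the dynamic scheduler before init
  $rpc framework_start_init              # finish subsystem initialization

  # Create one busy thread pinned to core 0 and one idle thread, adjust, then delete the idle one.
  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
  idle=$($rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0)
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$idle"

  kill "$scheduler_pid"; wait "$scheduler_pid"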
00:08:32.705 00:08:32.705 real 0m3.895s 00:08:32.705 user 0m7.297s 00:08:32.705 sys 0m0.491s 00:08:32.705 14:01:18 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.705 14:01:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:32.705 ************************************ 00:08:32.705 END TEST event_scheduler 00:08:32.705 ************************************ 00:08:32.705 14:01:18 event -- common/autotest_common.sh@1142 -- # return 0 00:08:32.705 14:01:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:32.705 14:01:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:32.705 14:01:18 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.705 14:01:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.705 14:01:18 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.705 ************************************ 00:08:32.705 START TEST app_repeat 00:08:32.705 ************************************ 00:08:32.705 14:01:18 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:08:32.705 14:01:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.705 14:01:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:32.705 14:01:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:32.705 14:01:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:32.705 14:01:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:32.705 14:01:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:32.705 14:01:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:32.964 14:01:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=179629 00:08:32.964 14:01:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:32.964 14:01:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:32.964 14:01:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 179629' 00:08:32.964 Process app_repeat pid: 179629 00:08:32.964 14:01:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:32.964 14:01:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:32.964 spdk_app_start Round 0 00:08:32.964 14:01:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 179629 /var/tmp/spdk-nbd.sock 00:08:32.964 14:01:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 179629 ']' 00:08:32.964 14:01:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:32.964 14:01:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:32.964 14:01:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:32.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:32.964 14:01:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:32.964 14:01:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:32.964 [2024-07-15 14:01:18.749318] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:08:32.964 [2024-07-15 14:01:18.749979] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid179629 ] 00:08:32.964 [2024-07-15 14:01:18.915202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.222 [2024-07-15 14:01:19.174451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.222 [2024-07-15 14:01:19.174463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.807 14:01:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:33.807 14:01:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:33.807 14:01:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.073 Malloc0 00:08:34.073 14:01:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.640 Malloc1 00:08:34.640 14:01:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:34.640 14:01:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:34.899 /dev/nbd0 00:08:34.899 14:01:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:34.899 14:01:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:34.899 14:01:20 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:34.899 1+0 records in 00:08:34.899 1+0 records out 00:08:34.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370267 s, 11.1 MB/s 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:34.899 14:01:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:34.899 14:01:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:34.899 14:01:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:34.899 14:01:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:35.158 /dev/nbd1 00:08:35.158 14:01:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:35.158 14:01:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.158 1+0 records in 00:08:35.158 1+0 records out 00:08:35.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021649 s, 18.9 MB/s 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:35.158 14:01:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:35.158 14:01:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:35.158 14:01:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:35.158 14:01:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.158 14:01:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.158 14:01:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:35.158 14:01:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.158 
14:01:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:35.416 { 00:08:35.416 "nbd_device": "/dev/nbd0", 00:08:35.416 "bdev_name": "Malloc0" 00:08:35.416 }, 00:08:35.416 { 00:08:35.416 "nbd_device": "/dev/nbd1", 00:08:35.416 "bdev_name": "Malloc1" 00:08:35.416 } 00:08:35.416 ]' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:35.416 { 00:08:35.416 "nbd_device": "/dev/nbd0", 00:08:35.416 "bdev_name": "Malloc0" 00:08:35.416 }, 00:08:35.416 { 00:08:35.416 "nbd_device": "/dev/nbd1", 00:08:35.416 "bdev_name": "Malloc1" 00:08:35.416 } 00:08:35.416 ]' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:35.416 /dev/nbd1' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:35.416 /dev/nbd1' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:35.416 256+0 records in 00:08:35.416 256+0 records out 00:08:35.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00618418 s, 170 MB/s 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:35.416 256+0 records in 00:08:35.416 256+0 records out 00:08:35.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255523 s, 41.0 MB/s 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:35.416 256+0 records in 00:08:35.416 256+0 records out 00:08:35.416 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260942 s, 40.2 MB/s 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.416 14:01:21 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:35.416 14:01:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.674 14:01:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.931 14:01:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.189 14:01:22 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.189 14:01:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:36.446 14:01:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:36.446 14:01:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:37.012 14:01:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:38.383 [2024-07-15 14:01:24.027430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:38.383 [2024-07-15 14:01:24.233813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.383 [2024-07-15 14:01:24.233821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.656 [2024-07-15 14:01:24.420027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:38.656 [2024-07-15 14:01:24.420155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:40.057 14:01:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:40.057 spdk_app_start Round 1 00:08:40.057 14:01:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:40.057 14:01:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 179629 /var/tmp/spdk-nbd.sock 00:08:40.057 14:01:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 179629 ']' 00:08:40.057 14:01:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.057 14:01:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:40.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:40.057 14:01:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:08:40.057 14:01:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:40.057 14:01:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.315 14:01:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:40.315 14:01:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:40.315 14:01:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.574 Malloc0 00:08:40.574 14:01:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.832 Malloc1 00:08:40.833 14:01:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:40.833 14:01:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:41.091 /dev/nbd0 00:08:41.091 14:01:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.091 14:01:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.091 1+0 records in 00:08:41.091 1+0 records out 
00:08:41.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171871 s, 23.8 MB/s 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.091 14:01:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:41.091 14:01:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.091 14:01:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.091 14:01:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:41.349 /dev/nbd1 00:08:41.349 14:01:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:41.349 14:01:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.349 1+0 records in 00:08:41.349 1+0 records out 00:08:41.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026486 s, 15.5 MB/s 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:41.349 14:01:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:41.349 14:01:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.349 14:01:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.349 14:01:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.349 14:01:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.349 14:01:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:41.608 14:01:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:41.608 { 00:08:41.608 "nbd_device": "/dev/nbd0", 00:08:41.608 "bdev_name": "Malloc0" 00:08:41.608 }, 00:08:41.608 { 00:08:41.608 "nbd_device": "/dev/nbd1", 00:08:41.608 "bdev_name": "Malloc1" 00:08:41.608 } 
00:08:41.608 ]' 00:08:41.608 14:01:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:41.608 { 00:08:41.608 "nbd_device": "/dev/nbd0", 00:08:41.608 "bdev_name": "Malloc0" 00:08:41.608 }, 00:08:41.608 { 00:08:41.608 "nbd_device": "/dev/nbd1", 00:08:41.608 "bdev_name": "Malloc1" 00:08:41.608 } 00:08:41.608 ]' 00:08:41.608 14:01:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:41.867 /dev/nbd1' 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:41.867 /dev/nbd1' 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:41.867 256+0 records in 00:08:41.867 256+0 records out 00:08:41.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00654756 s, 160 MB/s 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:41.867 256+0 records in 00:08:41.867 256+0 records out 00:08:41.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249565 s, 42.0 MB/s 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:41.867 256+0 records in 00:08:41.867 256+0 records out 00:08:41.867 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281037 s, 37.3 MB/s 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:41.867 14:01:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:41.867 14:01:27 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:41.868 14:01:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.126 14:01:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.693 14:01:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:42.951 14:01:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:42.952 14:01:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:43.210 14:01:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:44.588 [2024-07-15 14:01:30.382390] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:44.588 [2024-07-15 14:01:30.588237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.588 [2024-07-15 14:01:30.588239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.846 [2024-07-15 14:01:30.773730] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:44.846 [2024-07-15 14:01:30.774006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:46.222 14:01:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:46.222 14:01:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:46.222 spdk_app_start Round 2 00:08:46.222 14:01:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 179629 /var/tmp/spdk-nbd.sock 00:08:46.222 14:01:32 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 179629 ']' 00:08:46.222 14:01:32 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:46.222 14:01:32 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.222 14:01:32 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:46.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
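Editor's note: in the round that just completed, each nbd_start_disk call was followed by the harness's waitfornbd probe, which waits for the device to show up in /proc/partitions and then confirms it answers I/O with a single 4 KiB O_DIRECT read — the dd/stat/rm sequence traced above. A minimal stand-alone sketch of that probe in bash (the function body, retry count and scratch path are illustrative assumptions, not the harness source):

    wait_for_nbd() {
        local nbd_name=$1 tmp_file=$2 i
        for ((i = 1; i <= 20; i++)); do
            # the kernel lists the device in /proc/partitions once it is attached
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # read one 4096-byte block with O_DIRECT to prove the device really serves I/O
        dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$tmp_file")
        rm -f "$tmp_file"
        [ "$size" != 0 ]
    }
    wait_for_nbd nbd0 /tmp/nbdtest && echo "/dev/nbd0 is up"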
00:08:46.222 14:01:32 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.222 14:01:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:46.480 14:01:32 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.480 14:01:32 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:46.480 14:01:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.046 Malloc0 00:08:47.046 14:01:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.306 Malloc1 00:08:47.306 14:01:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.306 14:01:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:47.564 /dev/nbd0 00:08:47.564 14:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.564 14:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.564 14:01:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:47.564 14:01:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:47.564 14:01:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.565 1+0 records in 00:08:47.565 1+0 records out 
00:08:47.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331798 s, 12.3 MB/s 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.565 14:01:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:47.565 14:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.565 14:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.565 14:01:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:47.822 /dev/nbd1 00:08:47.822 14:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.822 14:01:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.822 1+0 records in 00:08:47.822 1+0 records out 00:08:47.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398741 s, 10.3 MB/s 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:47.822 14:01:33 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:08:47.822 14:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.822 14:01:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.822 14:01:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.822 14:01:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.822 14:01:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:48.081 { 00:08:48.081 "nbd_device": "/dev/nbd0", 00:08:48.081 "bdev_name": "Malloc0" 00:08:48.081 }, 00:08:48.081 { 00:08:48.081 "nbd_device": "/dev/nbd1", 00:08:48.081 "bdev_name": "Malloc1" 00:08:48.081 } 
00:08:48.081 ]' 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.081 { 00:08:48.081 "nbd_device": "/dev/nbd0", 00:08:48.081 "bdev_name": "Malloc0" 00:08:48.081 }, 00:08:48.081 { 00:08:48.081 "nbd_device": "/dev/nbd1", 00:08:48.081 "bdev_name": "Malloc1" 00:08:48.081 } 00:08:48.081 ]' 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.081 /dev/nbd1' 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.081 /dev/nbd1' 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.081 14:01:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:48.081 256+0 records in 00:08:48.081 256+0 records out 00:08:48.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00727692 s, 144 MB/s 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.081 256+0 records in 00:08:48.081 256+0 records out 00:08:48.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018933 s, 55.4 MB/s 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.081 256+0 records in 00:08:48.081 256+0 records out 00:08:48.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235875 s, 44.5 MB/s 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.081 14:01:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.649 14:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.907 14:01:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:48.908 14:01:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:48.908 14:01:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:49.475 14:01:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:50.852 [2024-07-15 14:01:36.557980] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:50.852 [2024-07-15 14:01:36.770628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.852 [2024-07-15 14:01:36.770631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.113 [2024-07-15 14:01:36.956865] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:51.113 [2024-07-15 14:01:36.957208] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:52.491 14:01:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 179629 /var/tmp/spdk-nbd.sock 00:08:52.491 14:01:38 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 179629 ']' 00:08:52.491 14:01:38 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:52.491 14:01:38 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:52.491 14:01:38 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:52.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
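Editor's note: the nbd_dd_data_verify pass repeated in each round above pushes 1 MiB of random data through every attached nbd device and then compares the device contents back against the pattern file with cmp before the disks are stopped. A reduced sketch of the same round-trip (paths and device list are illustrative; it assumes the devices are already attached):

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    # one shared 256 x 4 KiB random pattern, written to every device with O_DIRECT
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done
    # verify: the first 1 MiB of each device must match the pattern byte for byte
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"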
00:08:52.491 14:01:38 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:52.491 14:01:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:08:52.749 14:01:38 event.app_repeat -- event/event.sh@39 -- # killprocess 179629 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 179629 ']' 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 179629 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 179629 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 179629' 00:08:52.749 killing process with pid 179629 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@967 -- # kill 179629 00:08:52.749 14:01:38 event.app_repeat -- common/autotest_common.sh@972 -- # wait 179629 00:08:54.121 spdk_app_start is called in Round 0. 00:08:54.121 Shutdown signal received, stop current app iteration 00:08:54.121 Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 reinitialization... 00:08:54.121 spdk_app_start is called in Round 1. 00:08:54.121 Shutdown signal received, stop current app iteration 00:08:54.121 Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 reinitialization... 00:08:54.121 spdk_app_start is called in Round 2. 00:08:54.121 Shutdown signal received, stop current app iteration 00:08:54.121 Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 reinitialization... 00:08:54.121 spdk_app_start is called in Round 3. 
00:08:54.121 Shutdown signal received, stop current app iteration 00:08:54.121 14:01:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:54.121 14:01:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:54.121 00:08:54.121 real 0m21.028s 00:08:54.121 user 0m44.749s 00:08:54.121 sys 0m3.409s 00:08:54.121 14:01:39 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:54.121 14:01:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:54.121 ************************************ 00:08:54.121 END TEST app_repeat 00:08:54.121 ************************************ 00:08:54.121 14:01:39 event -- common/autotest_common.sh@1142 -- # return 0 00:08:54.121 14:01:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:54.121 14:01:39 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:54.121 14:01:39 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.121 14:01:39 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.121 14:01:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:54.121 ************************************ 00:08:54.121 START TEST cpu_locks 00:08:54.121 ************************************ 00:08:54.121 14:01:39 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:54.121 * Looking for test storage... 00:08:54.121 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:54.121 14:01:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:54.121 14:01:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:54.121 14:01:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:54.121 14:01:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:54.121 14:01:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:54.121 14:01:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:54.121 14:01:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.121 ************************************ 00:08:54.121 START TEST default_locks 00:08:54.121 ************************************ 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=180116 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 180116 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 180116 ']' 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:54.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
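Editor's note: the killprocess helper that retired the app_repeat target (pid 179629) above, and that every cpu_locks case below reuses, first confirms the pid is alive with kill -0, checks the process name reported by ps, then sends SIGTERM and waits for the shell to reap it. Roughly, keeping the helper name but treating the body as an illustrative reconstruction rather than the harness source:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # is the pid still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK app
        if [ "$name" = sudo ]; then
            echo "pid $pid is a sudo wrapper" >&2
            return 1                                # simplification: the real helper handles the wrapped child
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap it when it is a child of this shell
    }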
00:08:54.121 14:01:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:54.121 14:01:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.121 [2024-07-15 14:01:39.898006] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:54.121 [2024-07-15 14:01:39.898172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180116 ] 00:08:54.121 [2024-07-15 14:01:40.048550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.380 [2024-07-15 14:01:40.261065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.313 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.313 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:08:55.313 14:01:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 180116 00:08:55.313 14:01:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 180116 00:08:55.313 14:01:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 180116 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 180116 ']' 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 180116 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180116 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:55.570 killing process with pid 180116 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180116' 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 180116 00:08:55.570 14:01:41 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 180116 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 180116 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 180116 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 180116 00:08:58.096 14:01:43 
event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 180116 ']' 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:58.096 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (180116) - No such process 00:08:58.096 ERROR: process (pid: 180116) is no longer running 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:58.096 00:08:58.096 real 0m3.726s 00:08:58.096 user 0m3.809s 00:08:58.096 sys 0m0.620s 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.096 14:01:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:58.096 ************************************ 00:08:58.096 END TEST default_locks 00:08:58.096 ************************************ 00:08:58.096 14:01:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:08:58.096 14:01:43 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:58.096 14:01:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:58.096 14:01:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:58.096 14:01:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:58.096 ************************************ 00:08:58.096 START TEST default_locks_via_rpc 00:08:58.096 ************************************ 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=180196 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 180196 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 180196 ']' 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:58.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:58.096 14:01:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.096 [2024-07-15 14:01:43.689798] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:08:58.096 [2024-07-15 14:01:43.690648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180196 ] 00:08:58.096 [2024-07-15 14:01:43.861523] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.096 [2024-07-15 14:01:44.065777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 180196 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 180196 00:08:59.029 14:01:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 180196 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@948 -- # '[' -z 180196 ']' 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 180196 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180196 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:59.287 killing process with pid 180196 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180196' 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 180196 00:08:59.287 14:01:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 180196 00:09:01.820 00:09:01.820 real 0m3.665s 00:09:01.820 user 0m3.723s 00:09:01.820 sys 0m0.564s 00:09:01.820 14:01:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:01.820 14:01:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.820 ************************************ 00:09:01.820 END TEST default_locks_via_rpc 00:09:01.820 ************************************ 00:09:01.820 14:01:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:01.820 14:01:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:01.820 14:01:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:01.820 14:01:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:01.820 14:01:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:01.820 ************************************ 00:09:01.820 START TEST non_locking_app_on_locked_coremask 00:09:01.820 ************************************ 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=180273 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 180273 /var/tmp/spdk.sock 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 180273 ']' 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
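Editor's note: default_locks_via_rpc, wrapped up above, releases and re-takes the per-core lock files at runtime through RPC instead of a start-up flag, and the lslocks | grep spdk_cpu_lock probe is how locks_exist confirms the target is actually holding them. A sketch of that flow against a running target on the default socket (the pid lookup is an illustrative spot check, not harness code):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pid=$(pidof -s spdk_tgt)

    "$rpc" framework_disable_cpumask_locks               # drop the per-core lock files
    lslocks -p "$pid" | grep -c spdk_cpu_lock || true    # expect 0 matches here
    "$rpc" framework_enable_cpumask_locks                # take the locks again
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks held"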
00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.820 14:01:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:01.820 [2024-07-15 14:01:47.402205] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:01.820 [2024-07-15 14:01:47.402409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180273 ] 00:09:01.820 [2024-07-15 14:01:47.566018] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.820 [2024-07-15 14:01:47.780187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.754 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.754 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:02.754 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:02.754 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=180301 00:09:02.754 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 180301 /var/tmp/spdk2.sock 00:09:02.755 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 180301 ']' 00:09:02.755 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:02.755 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:02.755 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:02.755 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.755 14:01:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.755 [2024-07-15 14:01:48.582023] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:02.755 [2024-07-15 14:01:48.582543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180301 ] 00:09:02.755 [2024-07-15 14:01:48.729579] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
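Editor's note: non_locking_app_on_locked_coremask, starting above, runs two targets on the same core mask: the first takes the core locks as usual, the second is launched with --disable-cpumask-locks and its own RPC socket so the two do not collide. Stripped of the harness plumbing, the launch looks roughly like this:

    bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$bin" -m 0x1 &                                                  # holds /var/tmp/spdk.sock and the core locks
    pid1=$!
    "$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock files taken
    pid2=$!

Without the flag the second instance would have to contend for core 0's lock file, which is what the locking_app_on_locked_coremask case further down exercises.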
00:09:02.755 [2024-07-15 14:01:48.729660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.323 [2024-07-15 14:01:49.163292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.857 14:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:05.857 14:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:05.857 14:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 180273 00:09:05.857 14:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 180273 00:09:05.857 14:01:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 180273 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 180273 ']' 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 180273 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180273 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.425 killing process with pid 180273 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180273' 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 180273 00:09:06.425 14:01:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 180273 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 180301 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 180301 ']' 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 180301 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180301 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.687 killing process with pid 180301 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180301' 00:09:11.687 
14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 180301 00:09:11.687 14:01:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 180301 00:09:13.065 00:09:13.065 real 0m11.623s 00:09:13.065 user 0m12.332s 00:09:13.065 sys 0m1.241s 00:09:13.065 14:01:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.065 14:01:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.065 ************************************ 00:09:13.065 END TEST non_locking_app_on_locked_coremask 00:09:13.065 ************************************ 00:09:13.065 14:01:59 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:13.065 14:01:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:13.065 14:01:59 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:13.065 14:01:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.065 14:01:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:13.065 ************************************ 00:09:13.065 START TEST locking_app_on_unlocked_coremask 00:09:13.065 ************************************ 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=180456 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 180456 /var/tmp/spdk.sock 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 180456 ']' 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.065 14:01:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:13.323 [2024-07-15 14:01:59.071959] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:13.323 [2024-07-15 14:01:59.072144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180456 ] 00:09:13.323 [2024-07-15 14:01:59.226264] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
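Editor's note: every case above blocks on waitforlisten until the freshly started target answers on its UNIX domain socket; the "Waiting for process to start up and listen..." lines are that helper's banner. An equivalent readiness check can poll the rpc_get_methods RPC until it succeeds — a hedged stand-in for the real helper, with timeout and interval chosen here for illustration:

    wait_for_rpc() {
        local sock=$1 tries=${2:-100}
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        while ((tries-- > 0)); do
            # rpc_get_methods only answers once the target has created and is serving the socket
            if "$rpc" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
                return 0
            fi
            sleep 0.2
        done
        return 1
    }
    wait_for_rpc /var/tmp/spdk.sock && echo "target is listening"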
00:09:13.323 [2024-07-15 14:01:59.226439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.582 [2024-07-15 14:01:59.442111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=180482 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 180482 /var/tmp/spdk2.sock 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 180482 ']' 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.519 14:02:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:14.519 [2024-07-15 14:02:00.248676] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
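Editor's note: in the case above the roles are reversed — the first target (pid 180456) runs with --disable-cpumask-locks while the second one on /var/tmp/spdk2.sock (pid 180482) claims the core, which is why the locks_exist check further down is made against 180482. The asymmetry shows up directly in the lock tables of the two pids (the pids are the ones from this run; the commands are an illustrative spot check to run while both targets are alive):

    lslocks -p 180456 | grep -c spdk_cpu_lock || true   # 0: started with --disable-cpumask-locks
    lslocks -p 180482 | grep -c spdk_cpu_lock           # >0: this instance owns the core lock file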
00:09:14.519 [2024-07-15 14:02:00.249184] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180482 ] 00:09:14.519 [2024-07-15 14:02:00.398596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.087 [2024-07-15 14:02:00.822874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.619 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.619 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:17.619 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 180482 00:09:17.619 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 180482 00:09:17.619 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 180456 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 180456 ']' 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 180456 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180456 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.219 killing process with pid 180456 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180456' 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 180456 00:09:18.219 14:02:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 180456 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 180482 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 180482 ']' 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 180482 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180482 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:23.493 killing process with pid 180482 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180482' 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 180482 00:09:23.493 14:02:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 180482 00:09:25.410 00:09:25.410 real 0m11.998s 00:09:25.410 user 0m12.509s 00:09:25.410 sys 0m1.313s 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:25.410 ************************************ 00:09:25.410 END TEST locking_app_on_unlocked_coremask 00:09:25.410 ************************************ 00:09:25.410 14:02:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:25.410 14:02:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:25.410 14:02:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:25.410 14:02:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:25.410 14:02:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:25.410 ************************************ 00:09:25.410 START TEST locking_app_on_locked_coremask 00:09:25.410 ************************************ 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=180649 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 180649 /var/tmp/spdk.sock 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 180649 ']' 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.410 14:02:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:25.410 [2024-07-15 14:02:11.134923] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:25.410 [2024-07-15 14:02:11.135625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180649 ] 00:09:25.410 [2024-07-15 14:02:11.297540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.673 [2024-07-15 14:02:11.546460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=180670 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 180670 /var/tmp/spdk2.sock 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 180670 /var/tmp/spdk2.sock 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 180670 /var/tmp/spdk2.sock 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 180670 ']' 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:26.633 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:26.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:26.634 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:26.634 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:26.634 14:02:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:26.634 [2024-07-15 14:02:12.430300] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:26.634 [2024-07-15 14:02:12.430513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180670 ] 00:09:26.634 [2024-07-15 14:02:12.600844] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 180649 has claimed it. 00:09:26.634 [2024-07-15 14:02:12.600965] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:27.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (180670) - No such process 00:09:27.227 ERROR: process (pid: 180670) is no longer running 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 180649 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 180649 00:09:27.227 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 180649 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 180649 ']' 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 180649 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180649 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:27.797 killing process with pid 180649 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180649' 00:09:27.797 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 180649 00:09:27.798 14:02:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 180649 00:09:30.330 00:09:30.330 real 0m4.862s 00:09:30.330 user 0m5.081s 00:09:30.330 sys 0m0.855s 00:09:30.330 14:02:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:30.330 14:02:15 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.330 ************************************ 00:09:30.330 END TEST locking_app_on_locked_coremask 00:09:30.330 ************************************ 00:09:30.330 14:02:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:30.330 14:02:15 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:30.330 14:02:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:30.330 14:02:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:30.330 14:02:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:30.330 ************************************ 00:09:30.330 START TEST locking_overlapped_coremask 00:09:30.330 ************************************ 00:09:30.330 14:02:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:09:30.330 14:02:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=180746 00:09:30.330 14:02:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 180746 /var/tmp/spdk.sock 00:09:30.330 14:02:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:30.330 14:02:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 180746 ']' 00:09:30.330 14:02:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.330 14:02:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.331 14:02:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.331 14:02:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.331 14:02:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:30.331 [2024-07-15 14:02:16.046544] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:30.331 [2024-07-15 14:02:16.047186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180746 ] 00:09:30.331 [2024-07-15 14:02:16.218982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:30.589 [2024-07-15 14:02:16.470126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.589 [2024-07-15 14:02:16.470211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.589 [2024-07-15 14:02:16.470228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=180769 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 180769 /var/tmp/spdk2.sock 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 180769 /var/tmp/spdk2.sock 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 180769 /var/tmp/spdk2.sock 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 180769 ']' 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:31.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:31.524 14:02:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:31.524 [2024-07-15 14:02:17.372580] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:31.524 [2024-07-15 14:02:17.373278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180769 ] 00:09:31.782 [2024-07-15 14:02:17.558352] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 180746 has claimed it. 00:09:31.782 [2024-07-15 14:02:17.558455] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:32.356 ERROR: process (pid: 180769) is no longer running 00:09:32.356 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (180769) - No such process 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 180746 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 180746 ']' 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 180746 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180746 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:32.356 killing process with pid 180746 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180746' 00:09:32.356 14:02:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 180746 00:09:32.356 14:02:18 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 180746 00:09:34.905 00:09:34.905 real 0m4.464s 00:09:34.905 user 0m11.678s 00:09:34.905 sys 0m0.663s 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 ************************************ 00:09:34.905 END TEST locking_overlapped_coremask 00:09:34.905 ************************************ 00:09:34.905 14:02:20 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:34.905 14:02:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:34.905 14:02:20 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:34.905 14:02:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:34.905 14:02:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 ************************************ 00:09:34.905 START TEST locking_overlapped_coremask_via_rpc 00:09:34.905 ************************************ 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=180838 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 180838 /var/tmp/spdk.sock 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 180838 ']' 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:34.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:34.905 14:02:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.905 [2024-07-15 14:02:20.563943] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:34.905 [2024-07-15 14:02:20.564142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180838 ] 00:09:34.905 [2024-07-15 14:02:20.737865] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:34.905 [2024-07-15 14:02:20.738125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:35.162 [2024-07-15 14:02:20.988880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.162 [2024-07-15 14:02:20.989041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.162 [2024-07-15 14:02:20.989054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=180865 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 180865 /var/tmp/spdk2.sock 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 180865 ']' 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:36.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.119 14:02:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.119 [2024-07-15 14:02:21.883011] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:36.119 [2024-07-15 14:02:21.883681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180865 ] 00:09:36.119 [2024-07-15 14:02:22.067764] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:36.119 [2024-07-15 14:02:22.067848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:36.686 [2024-07-15 14:02:22.512387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.686 [2024-07-15 14:02:22.521878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:36.686 [2024-07-15 14:02:22.521878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.240 [2024-07-15 14:02:24.750239] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 180838 has claimed it. 
00:09:39.240 request: 00:09:39.240 { 00:09:39.240 "method": "framework_enable_cpumask_locks", 00:09:39.240 "req_id": 1 00:09:39.240 } 00:09:39.240 Got JSON-RPC error response 00:09:39.240 response: 00:09:39.240 { 00:09:39.240 "code": -32603, 00:09:39.240 "message": "Failed to claim CPU core: 2" 00:09:39.240 } 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 180838 /var/tmp/spdk.sock 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 180838 ']' 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.240 14:02:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 180865 /var/tmp/spdk2.sock 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 180865 ']' 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:39.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
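The -32603 response above is the secondary target on /var/tmp/spdk2.sock failing to re-claim core 2 after lock enforcement was turned on for the primary (pid 180838). As a rough illustration only, not part of the test script, the same RPC can be issued by hand; this assumes rpc_cmd wraps SPDK's scripts/rpc.py, which accepts the socket via -s:

    # Hypothetical manual reproduction of the failing call shown above.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # Expected while pid 180838 still holds core 2: JSON-RPC error -32603, "Failed to claim CPU core: 2"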
00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:39.240 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:39.531 00:09:39.531 real 0m4.792s 00:09:39.531 user 0m1.656s 00:09:39.531 sys 0m0.230s 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.531 14:02:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.531 ************************************ 00:09:39.531 END TEST locking_overlapped_coremask_via_rpc 00:09:39.531 ************************************ 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:09:39.531 14:02:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:39.531 14:02:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 180838 ]] 00:09:39.531 14:02:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 180838 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 180838 ']' 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 180838 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180838 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180838' 00:09:39.531 killing process with pid 180838 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 180838 00:09:39.531 14:02:25 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 180838 00:09:42.062 14:02:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 180865 ]] 00:09:42.062 14:02:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 180865 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 180865 ']' 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 180865 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
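check_remaining_locks above globs /var/tmp/spdk_cpu_lock_* and expects exactly spdk_cpu_lock_000 through _002 for the 0x7 mask. A quick way to see the same locks while a target still holds its cores follows the pattern cpu_locks.sh@22 uses; the pgrep lookup here is a hypothetical convenience, the suite itself passes the known pid:

    # Hedged one-liner; the test calls 'lslocks -p <pid> | grep -q spdk_cpu_lock' with an explicit pid.
    lslocks -p "$(pgrep -f spdk_tgt | head -n1)" | grep spdk_cpu_lock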
00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 180865 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 180865' 00:09:42.062 killing process with pid 180865 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 180865 00:09:42.062 14:02:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 180865 00:09:44.620 14:02:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:44.620 14:02:30 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:44.620 14:02:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 180838 ]] 00:09:44.620 14:02:30 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 180838 00:09:44.620 14:02:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 180838 ']' 00:09:44.620 14:02:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 180838 00:09:44.620 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (180838) - No such process 00:09:44.620 14:02:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 180838 is not found' 00:09:44.620 Process with pid 180838 is not found 00:09:44.620 14:02:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 180865 ]] 00:09:44.620 14:02:30 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 180865 00:09:44.620 14:02:30 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 180865 ']' 00:09:44.621 14:02:30 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 180865 00:09:44.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (180865) - No such process 00:09:44.621 14:02:30 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 180865 is not found' 00:09:44.621 Process with pid 180865 is not found 00:09:44.621 14:02:30 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:44.621 00:09:44.621 real 0m50.291s 00:09:44.621 user 1m26.797s 00:09:44.621 sys 0m6.635s 00:09:44.621 14:02:30 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.621 14:02:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:44.621 ************************************ 00:09:44.621 END TEST cpu_locks 00:09:44.621 ************************************ 00:09:44.621 14:02:30 event -- common/autotest_common.sh@1142 -- # return 0 00:09:44.621 00:09:44.621 real 1m21.106s 00:09:44.621 user 2m26.740s 00:09:44.621 sys 0m11.145s 00:09:44.621 14:02:30 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.621 14:02:30 event -- common/autotest_common.sh@10 -- # set +x 00:09:44.621 ************************************ 00:09:44.621 END TEST event 00:09:44.621 ************************************ 00:09:44.621 14:02:30 -- common/autotest_common.sh@1142 -- # return 0 00:09:44.621 14:02:30 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:44.621 14:02:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:44.621 14:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.621 14:02:30 -- common/autotest_common.sh@10 -- # set +x 00:09:44.621 
************************************ 00:09:44.621 START TEST thread 00:09:44.621 ************************************ 00:09:44.621 14:02:30 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:44.621 * Looking for test storage... 00:09:44.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:44.621 14:02:30 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:44.621 14:02:30 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:44.621 14:02:30 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.621 14:02:30 thread -- common/autotest_common.sh@10 -- # set +x 00:09:44.621 ************************************ 00:09:44.621 START TEST thread_poller_perf 00:09:44.621 ************************************ 00:09:44.621 14:02:30 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:44.621 [2024-07-15 14:02:30.299064] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:44.621 [2024-07-15 14:02:30.299534] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181072 ] 00:09:44.621 [2024-07-15 14:02:30.454320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.880 [2024-07-15 14:02:30.708886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.880 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:46.256 ====================================== 00:09:46.256 busy:2204973571 (cyc) 00:09:46.256 total_run_count: 1228000 00:09:46.256 tsc_hz: 2200000000 (cyc) 00:09:46.256 ====================================== 00:09:46.256 poller_cost: 1795 (cyc), 815 (nsec) 00:09:46.256 00:09:46.256 real 0m1.834s 00:09:46.256 user 0m1.609s 00:09:46.256 sys 0m0.113s 00:09:46.256 14:02:32 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.256 14:02:32 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:46.256 ************************************ 00:09:46.256 END TEST thread_poller_perf 00:09:46.256 ************************************ 00:09:46.256 14:02:32 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:46.256 14:02:32 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:46.256 14:02:32 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:09:46.256 14:02:32 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.256 14:02:32 thread -- common/autotest_common.sh@10 -- # set +x 00:09:46.256 ************************************ 00:09:46.256 START TEST thread_poller_perf 00:09:46.256 ************************************ 00:09:46.256 14:02:32 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:46.256 [2024-07-15 14:02:32.193394] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:46.256 [2024-07-15 14:02:32.193834] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181115 ] 00:09:46.514 [2024-07-15 14:02:32.350343] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.773 [2024-07-15 14:02:32.604349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.773 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:48.182 ====================================== 00:09:48.182 busy:2204017502 (cyc) 00:09:48.182 total_run_count: 12446000 00:09:48.182 tsc_hz: 2200000000 (cyc) 00:09:48.182 ====================================== 00:09:48.182 poller_cost: 177 (cyc), 80 (nsec) 00:09:48.182 00:09:48.182 real 0m1.838s 00:09:48.182 user 0m1.603s 00:09:48.182 sys 0m0.124s 00:09:48.182 14:02:33 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:48.182 14:02:33 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:48.182 ************************************ 00:09:48.182 END TEST thread_poller_perf 00:09:48.182 ************************************ 00:09:48.182 14:02:34 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:48.182 14:02:34 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:48.182 14:02:34 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:48.182 14:02:34 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:09:48.182 14:02:34 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:48.182 14:02:34 thread -- common/autotest_common.sh@10 -- # set +x 00:09:48.182 ************************************ 00:09:48.182 START TEST thread_spdk_lock 00:09:48.182 ************************************ 00:09:48.182 14:02:34 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:48.182 [2024-07-15 14:02:34.091979] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:48.182 [2024-07-15 14:02:34.092871] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181163 ] 00:09:48.440 [2024-07-15 14:02:34.261619] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.698 [2024-07-15 14:02:34.480483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.698 [2024-07-15 14:02:34.480487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.265 [2024-07-15 14:02:34.960596] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.265 [2024-07-15 14:02:34.960822] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:49.265 [2024-07-15 14:02:34.960912] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0xc74c40 00:09:49.265 [2024-07-15 14:02:34.968783] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.265 [2024-07-15 14:02:34.968975] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.265 [2024-07-15 14:02:34.969187] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:49.524 Starting test contend 00:09:49.524 Worker Delay Wait us Hold us Total us 00:09:49.524 0 3 172776 179171 351948 00:09:49.524 1 5 95226 279611 374837 00:09:49.524 PASS test contend 00:09:49.524 Starting test hold_by_poller 00:09:49.524 PASS test hold_by_poller 00:09:49.524 Starting test hold_by_message 00:09:49.524 PASS test hold_by_message 00:09:49.524 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:49.524 100014 assertions passed 00:09:49.524 0 assertions failed 00:09:49.524 00:09:49.524 real 0m1.301s 00:09:49.524 user 0m1.558s 00:09:49.524 sys 0m0.121s 00:09:49.524 14:02:35 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.524 14:02:35 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:09:49.524 ************************************ 00:09:49.524 END TEST thread_spdk_lock 00:09:49.524 ************************************ 00:09:49.524 14:02:35 thread -- common/autotest_common.sh@1142 -- # return 0 00:09:49.524 00:09:49.524 real 0m5.226s 00:09:49.524 user 0m4.860s 00:09:49.524 sys 0m0.509s 00:09:49.524 14:02:35 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:49.524 14:02:35 thread -- common/autotest_common.sh@10 -- # set +x 00:09:49.524 ************************************ 00:09:49.524 END TEST thread 00:09:49.524 ************************************ 00:09:49.524 14:02:35 -- common/autotest_common.sh@1142 -- # return 0 00:09:49.524 14:02:35 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:49.524 14:02:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
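For the poller_perf summaries above, poller_cost is simply busy cycles divided by total_run_count, with the nanosecond figure derived from tsc_hz. A back-of-the-envelope check in shell, using the counters copied from the first run's output:

    # Reproduces the reported poller_cost from the printed counters (values taken from the log above).
    awk 'BEGIN { busy=2204973571; runs=1228000; hz=2200000000;
                 cyc=int(busy/runs); printf "%d (cyc), %d (nsec)\n", cyc, int(cyc*1e9/hz) }'
    # -> 1795 (cyc), 815 (nsec); the second run's counters give 177 (cyc), 80 (nsec) the same way.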
00:09:49.524 14:02:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:49.524 14:02:35 -- common/autotest_common.sh@10 -- # set +x 00:09:49.524 ************************************ 00:09:49.524 START TEST accel 00:09:49.524 ************************************ 00:09:49.524 14:02:35 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:49.524 * Looking for test storage... 00:09:49.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:49.783 14:02:35 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:09:49.783 14:02:35 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:09:49.783 14:02:35 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:49.783 14:02:35 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=181250 00:09:49.783 14:02:35 accel -- accel/accel.sh@63 -- # waitforlisten 181250 00:09:49.783 14:02:35 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:49.783 14:02:35 accel -- common/autotest_common.sh@829 -- # '[' -z 181250 ']' 00:09:49.783 14:02:35 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.783 14:02:35 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.783 14:02:35 accel -- accel/accel.sh@61 -- # build_accel_config 00:09:49.783 14:02:35 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.783 14:02:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:49.783 14:02:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:49.783 14:02:35 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.783 14:02:35 accel -- common/autotest_common.sh@10 -- # set +x 00:09:49.783 14:02:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.783 14:02:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.783 14:02:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:49.783 14:02:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:09:49.783 14:02:35 accel -- accel/accel.sh@41 -- # jq -r . 00:09:49.783 [2024-07-15 14:02:35.594891] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:49.783 [2024-07-15 14:02:35.595500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181250 ] 00:09:50.041 [2024-07-15 14:02:35.783763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.041 [2024-07-15 14:02:36.004604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@862 -- # return 0 00:09:50.992 14:02:36 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:09:50.992 14:02:36 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:09:50.992 14:02:36 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:09:50.992 14:02:36 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:09:50.992 14:02:36 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:50.992 14:02:36 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:09:50.992 14:02:36 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@10 -- # set +x 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # IFS== 00:09:50.992 14:02:36 accel -- accel/accel.sh@72 -- # read -r opc module 00:09:50.992 14:02:36 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:09:50.992 14:02:36 accel -- accel/accel.sh@75 -- # killprocess 181250 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@948 -- # '[' -z 181250 ']' 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@952 -- # kill -0 181250 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@953 -- # uname 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 181250 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 181250' 00:09:50.992 killing process with pid 181250 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@967 -- # kill 181250 00:09:50.992 14:02:36 accel -- common/autotest_common.sh@972 -- # wait 181250 00:09:53.536 14:02:39 accel -- accel/accel.sh@76 -- # trap - ERR 00:09:53.536 14:02:39 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:09:53.536 14:02:39 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.536 14:02:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.536 14:02:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:53.537 14:02:39 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@36 
-- # [[ -n '' ]] 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:09:53.537 14:02:39 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:09:53.537 14:02:39 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.537 14:02:39 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:09:53.537 14:02:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:53.537 14:02:39 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:53.537 14:02:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:53.537 14:02:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.537 14:02:39 accel -- common/autotest_common.sh@10 -- # set +x 00:09:53.537 ************************************ 00:09:53.537 START TEST accel_missing_filename 00:09:53.537 ************************************ 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:53.537 14:02:39 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:09:53.537 14:02:39 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:09:53.537 [2024-07-15 14:02:39.332786] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:53.537 [2024-07-15 14:02:39.333066] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181340 ] 00:09:53.537 [2024-07-15 14:02:39.486168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.796 [2024-07-15 14:02:39.703857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.053 [2024-07-15 14:02:39.902968] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:54.620 [2024-07-15 14:02:40.425701] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:09:54.879 A filename is required. 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:54.879 00:09:54.879 real 0m1.537s 00:09:54.879 user 0m1.298s 00:09:54.879 sys 0m0.177s 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.879 14:02:40 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:09:54.879 ************************************ 00:09:54.879 END TEST accel_missing_filename 00:09:54.879 ************************************ 00:09:54.879 14:02:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:54.879 14:02:40 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:54.879 14:02:40 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:54.879 14:02:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.879 14:02:40 accel -- common/autotest_common.sh@10 -- # set +x 00:09:55.137 ************************************ 00:09:55.137 START TEST accel_compress_verify 00:09:55.137 ************************************ 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:55.137 14:02:40 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:55.138 14:02:40 accel.accel_compress_verify -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:55.138 14:02:40 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:09:55.138 [2024-07-15 14:02:40.932701] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:09:55.138 [2024-07-15 14:02:40.933684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181385 ] 00:09:55.138 [2024-07-15 14:02:41.100677] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.396 [2024-07-15 14:02:41.313333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.654 [2024-07-15 14:02:41.518261] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:56.221 [2024-07-15 14:02:42.020162] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:09:56.480 00:09:56.480 Compression does not support the verify option, aborting. 
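That abort is the expected outcome here: accel_compress_verify asks accel_perf to compress the sample input with -y (verify), which the compress path rejects. For contrast, a hedged sketch of the invocation shape compress does accept, reusing the binary and input paths shown in the trace and the -o semantics from the usage text printed later; this assumes a plain run without the -c /dev/fd/62 config pipe the harness uses, and whether it completes depends on the compress support compiled into this particular build:

  # Sketch only: compress the sample input without the unsupported verify flag.
  # Paths are the ones visible in the trace; per the help text, -o 0 means
  # "use the input file size" for compress/decompress workloads.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib \
      -o 0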
00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:56.480 00:09:56.480 real 0m1.527s 00:09:56.480 user 0m1.272s 00:09:56.480 sys 0m0.191s 00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.480 14:02:42 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:09:56.480 ************************************ 00:09:56.480 END TEST accel_compress_verify 00:09:56.480 ************************************ 00:09:56.480 14:02:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:56.480 14:02:42 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:56.480 14:02:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:09:56.480 14:02:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.480 14:02:42 accel -- common/autotest_common.sh@10 -- # set +x 00:09:56.480 ************************************ 00:09:56.480 START TEST accel_wrong_workload 00:09:56.480 ************************************ 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.480 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:09:56.480 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:56.480 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:09:56.480 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.480 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.480 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.480 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.740 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.740 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:09:56.740 14:02:42 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
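The compress_verify case above and the accel_wrong_workload case just starting both lean on the NOT helper from autotest_common.sh: run a command that is expected to fail, and succeed only if it does. The trace only shows the helper's internals (the es bookkeeping and the final (( !es == 0 )) check), so the following is a simplified, hedged sketch of the same idea, not the actual autotest_common.sh implementation:

  # Simplified NOT-style wrapper: capture the wrapped command's exit status
  # without tripping set -e, then invert it.
  NOT() {
      local es=0
      "$@" || es=$?      # remember the failure instead of aborting
      (( es != 0 ))      # succeed only if the wrapped command failed
  }
  # Example: expect accel_perf to reject a bogus workload type.
  NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w foobar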
00:09:56.740 Unsupported workload type: foobar 00:09:56.740 [2024-07-15 14:02:42.511788] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:56.740 accel_perf options: 00:09:56.740 [-h help message] 00:09:56.740 [-q queue depth per core] 00:09:56.740 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:56.740 [-T number of threads per core 00:09:56.740 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:56.740 [-t time in seconds] 00:09:56.740 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:56.740 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:56.740 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:56.740 [-l for compress/decompress workloads, name of uncompressed input file 00:09:56.740 [-S for crc32c workload, use this seed value (default 0) 00:09:56.740 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:56.740 [-f for fill workload, use this BYTE value (default 255) 00:09:56.740 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:56.740 [-y verify result if this switch is on] 00:09:56.740 [-a tasks to allocate per core (default: same value as -q)] 00:09:56.740 Can be used to spread operations across a wider range of memory. 00:09:56.740 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:09:56.740 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:56.740 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:56.740 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:56.740 00:09:56.740 real 0m0.060s 00:09:56.740 user 0m0.037s 00:09:56.740 sys 0m0.021s 00:09:56.740 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.740 14:02:42 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:09:56.740 ************************************ 00:09:56.740 END TEST accel_wrong_workload 00:09:56.740 ************************************ 00:09:56.740 14:02:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:56.740 14:02:42 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:56.740 14:02:42 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:09:56.740 14:02:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.740 14:02:42 accel -- common/autotest_common.sh@10 -- # set +x 00:09:56.740 ************************************ 00:09:56.740 START TEST accel_negative_buffers 00:09:56.740 ************************************ 00:09:56.740 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:56.740 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:09:56.740 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:56.740 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:09:56.740 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.740 14:02:42 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:09:56.740 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:56.740 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:09:56.740 14:02:42 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:09:56.740 -x option must be non-negative. 00:09:56.740 [2024-07-15 14:02:42.620993] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:56.740 accel_perf options: 00:09:56.740 [-h help message] 00:09:56.740 [-q queue depth per core] 00:09:56.740 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:56.740 [-T number of threads per core 00:09:56.740 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:56.740 [-t time in seconds] 00:09:56.740 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:56.740 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:09:56.740 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:56.741 [-l for compress/decompress workloads, name of uncompressed input file 00:09:56.741 [-S for crc32c workload, use this seed value (default 0) 00:09:56.741 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:56.741 [-f for fill workload, use this BYTE value (default 255) 00:09:56.741 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:56.741 [-y verify result if this switch is on] 00:09:56.741 [-a tasks to allocate per core (default: same value as -q)] 00:09:56.741 Can be used to spread operations across a wider range of memory. 
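The negative-buffers case feeds accel_perf an xor workload with -x -1, which the option parser rejects with the usage text above. For comparison, a sketch of a well-formed xor run built only from flags listed in that usage text; the values are illustrative, not taken from the test itself:

  # Sketch: valid xor invocation; per the help text, -x must name at least
  # two source buffers, and -y asks accel_perf to verify the result.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2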
00:09:56.741 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:09:56.741 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:56.741 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:56.741 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:56.741 00:09:56.741 real 0m0.059s 00:09:56.741 user 0m0.077s 00:09:56.741 sys 0m0.031s 00:09:56.741 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.741 14:02:42 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:09:56.741 ************************************ 00:09:56.741 END TEST accel_negative_buffers 00:09:56.741 ************************************ 00:09:56.741 14:02:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:56.741 14:02:42 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:56.741 14:02:42 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:56.741 14:02:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.741 14:02:42 accel -- common/autotest_common.sh@10 -- # set +x 00:09:56.741 ************************************ 00:09:56.741 START TEST accel_crc32c 00:09:56.741 ************************************ 00:09:56.741 14:02:42 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:09:56.741 14:02:42 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:09:56.741 [2024-07-15 14:02:42.736383] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:09:56.741 [2024-07-15 14:02:42.736850] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181486 ] 00:09:57.000 [2024-07-15 14:02:42.904480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.311 [2024-07-15 14:02:43.165832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:09:57.572 14:02:43 accel.accel_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:57.572 14:02:43 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" 
in 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:09:59.474 14:02:45 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:59.474 00:09:59.474 real 0m2.580s 00:09:59.474 user 0m2.305s 00:09:59.474 sys 0m0.210s 00:09:59.474 14:02:45 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.474 14:02:45 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:09:59.474 ************************************ 00:09:59.474 END TEST accel_crc32c 00:09:59.474 ************************************ 00:09:59.474 14:02:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:09:59.474 14:02:45 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:59.474 14:02:45 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:09:59.474 14:02:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.474 14:02:45 accel -- common/autotest_common.sh@10 -- # set +x 00:09:59.474 ************************************ 00:09:59.474 START TEST accel_crc32c_C2 00:09:59.474 ************************************ 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:09:59.474 14:02:45 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:09:59.474 [2024-07-15 14:02:45.357691] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
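The [[ -n software ]] / [[ -n crc32c ]] / [[ software == software ]] checks that close accel_crc32c compare the module reported by accel_perf against the expected_opcs map built at the top of accel.sh from the accel_get_opc_assignments RPC. A hedged sketch of that query outside the harness, assuming the stock scripts/rpc.py client location and the default /var/tmp/spdk.sock socket the target listens on; the jq filter is the one visible in the trace:

  # Sketch: list opcode->module assignments the same way accel.sh does.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # Illustrative output when no hardware engine is configured:
  #   copy=software
  #   crc32c=software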
00:09:59.474 [2024-07-15 14:02:45.358352] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181537 ] 00:09:59.732 [2024-07-15 14:02:45.512081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.990 [2024-07-15 14:02:45.781488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:00.248 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:00.249 14:02:46 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:02.215 00:10:02.215 real 0m2.613s 00:10:02.215 user 0m2.360s 00:10:02.215 sys 0m0.189s 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.215 14:02:47 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:02.215 ************************************ 00:10:02.215 END TEST accel_crc32c_C2 00:10:02.215 ************************************ 00:10:02.215 14:02:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:02.215 14:02:47 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:10:02.215 14:02:47 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:02.215 14:02:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.215 14:02:47 accel -- common/autotest_common.sh@10 -- # set +x 00:10:02.215 ************************************ 00:10:02.215 START TEST accel_copy 00:10:02.215 ************************************ 00:10:02.215 14:02:47 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:02.215 14:02:47 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:02.215 14:02:47 
accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:10:02.215 [2024-07-15 14:02:48.039838] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:02.215 [2024-07-15 14:02:48.040338] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181595 ] 00:10:02.519 [2024-07-15 14:02:48.213714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.519 [2024-07-15 14:02:48.444849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:02.777 14:02:48 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:10:04.676 14:02:50 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.676 00:10:04.676 real 0m2.585s 00:10:04.676 user 0m2.301s 00:10:04.676 sys 0m0.203s 00:10:04.676 14:02:50 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.676 14:02:50 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:10:04.676 ************************************ 00:10:04.676 END TEST accel_copy 00:10:04.676 ************************************ 00:10:04.676 14:02:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:04.677 14:02:50 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:04.677 14:02:50 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:04.677 14:02:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.677 14:02:50 accel -- common/autotest_common.sh@10 -- # set +x 00:10:04.677 ************************************ 00:10:04.677 START TEST accel_fill 00:10:04.677 ************************************ 00:10:04.677 14:02:50 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:10:04.677 14:02:50 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:10:04.677 [2024-07-15 14:02:50.672795] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
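Every case in this file, including the accel_copy test that just ended and the accel_fill invocation above, goes through the run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timings scattered through this log. A simplified, hedged sketch of that pattern (not the actual autotest_common.sh code, which also manages xtrace and exit-status bookkeeping):

  # Simplified run_test pattern: banner, timed body, banner.
  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }
  # Example matching the invocation seen above:
  run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y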
00:10:04.677 [2024-07-15 14:02:50.673496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181651 ] 00:10:04.936 [2024-07-15 14:02:50.829838] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.193 [2024-07-15 14:02:51.061832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- 
accel/accel.sh@22 -- # accel_module=software 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:05.450 14:02:51 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:07.389 
14:02:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:10:07.389 14:02:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:07.389 00:10:07.390 real 0m2.526s 00:10:07.390 user 0m2.266s 00:10:07.390 sys 0m0.194s 00:10:07.390 14:02:53 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:07.390 14:02:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:10:07.390 ************************************ 00:10:07.390 END TEST accel_fill 00:10:07.390 ************************************ 00:10:07.390 14:02:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:07.390 14:02:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:07.390 14:02:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:07.390 14:02:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:07.390 14:02:53 accel -- common/autotest_common.sh@10 -- # set +x 00:10:07.390 ************************************ 00:10:07.390 START TEST accel_copy_crc32c 00:10:07.390 ************************************ 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:10:07.390 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:10:07.390 [2024-07-15 14:02:53.259647] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
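Each test body is wrapped by run_test, which accounts for the START TEST / END TEST banners and the real/user/sys block seen above. A minimal illustrative wrapper that reproduces that visible behaviour (not SPDK's actual autotest_common.sh implementation):

```bash
#!/usr/bin/env bash
# Illustrative sketch only: reproduces the banners and timing visible in this
# log; SPDK's real run_test does more (xtrace control, failure bookkeeping).
run_test_sketch() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"            # bash's time keyword prints the real/user/sys lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# Mirrors the traced invocation; accel_test comes from the sourced accel test
# script, so substitute any command to try the wrapper stand-alone:
run_test_sketch demo sleep 1
```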
00:10:07.390 [2024-07-15 14:02:53.262263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181709 ] 00:10:07.650 [2024-07-15 14:02:53.421007] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.650 [2024-07-15 14:02:53.651525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 
14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:07.909 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:07.910 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:07.910 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:07.910 14:02:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- 
accel/accel.sh@20 -- # val= 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:10.441 00:10:10.441 real 0m2.655s 00:10:10.441 user 0m2.372s 00:10:10.441 sys 0m0.206s 00:10:10.441 14:02:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.442 14:02:55 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:10:10.442 ************************************ 00:10:10.442 END TEST accel_copy_crc32c 00:10:10.442 ************************************ 00:10:10.442 14:02:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:10.442 14:02:55 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:10.442 14:02:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:10.442 14:02:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.442 14:02:55 accel -- common/autotest_common.sh@10 -- # set +x 00:10:10.442 ************************************ 00:10:10.442 START TEST accel_copy_crc32c_C2 00:10:10.442 ************************************ 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:10:10.442 14:02:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:10:10.442 [2024-07-15 14:02:55.969934] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:10.442 [2024-07-15 14:02:55.971075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181772 ] 00:10:10.442 [2024-07-15 14:02:56.138717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.442 [2024-07-15 14:02:56.393420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
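Before every accel_perf launch the trace shows build_accel_config: an empty accel_json_cfg array, a series of `[[ 0 -gt 0 ]]` feature checks that all fall through, and a `local IFS=,` / `jq -r .` join path. A hedged reconstruction of that pattern; the JSON envelope here is an assumption for illustration, not SPDK's exact schema:

```bash
#!/usr/bin/env bash
# Hedged sketch of the build_accel_config pattern visible in the xtrace.
build_accel_config_sketch() {
    local accel_json_cfg=()
    # Module fragments would be appended when the matching SPDK_TEST_* flags
    # are set; in this run none are, so the [[ 0 -gt 0 ]] checks all fail and
    # no config document is emitted.
    # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')  # hypothetical fragment

    if [[ ${#accel_json_cfg[@]} -gt 0 ]]; then
        local IFS=,    # join the fragments with commas, as the trace suggests
        printf '{"subsystems": [{"subsystem": "accel", "config": [%s]}]}\n' \
            "${accel_json_cfg[*]}" | jq -r .
    fi
}
build_accel_config_sketch
```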
00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:10.701 14:02:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:12.604 00:10:12.604 real 0m2.579s 00:10:12.604 user 0m2.287s 00:10:12.604 sys 0m0.231s 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:12.604 14:02:58 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:10:12.604 ************************************ 00:10:12.604 END TEST accel_copy_crc32c_C2 00:10:12.604 
************************************ 00:10:12.604 14:02:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:12.604 14:02:58 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:12.604 14:02:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:12.604 14:02:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:12.604 14:02:58 accel -- common/autotest_common.sh@10 -- # set +x 00:10:12.604 ************************************ 00:10:12.604 START TEST accel_dualcast 00:10:12.604 ************************************ 00:10:12.604 14:02:58 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:10:12.604 14:02:58 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:10:12.604 [2024-07-15 14:02:58.584243] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:10:12.604 [2024-07-15 14:02:58.584552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181823 ] 00:10:12.863 [2024-07-15 14:02:58.735939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.122 [2024-07-15 14:02:58.983933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.380 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast 
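Each accel_perf launch registers with DPDK under its own --file-prefix=spdk_pid<PID>, visible in the EAL parameter lines above, which makes it easy to map workloads to processes when reading a saved copy of this console output. A small example; the log filename is hypothetical:

```bash
# List the per-test SPDK file prefixes that appear in a saved copy of this log.
grep -oE 'file-prefix=spdk_pid[0-9]+' console.log | sort -u
```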
-- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:13.381 14:02:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:10:15.283 14:03:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:15.283 00:10:15.283 real 0m2.564s 00:10:15.283 user 0m2.318s 00:10:15.283 sys 0m0.179s 00:10:15.283 14:03:01 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.283 14:03:01 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:10:15.283 ************************************ 00:10:15.283 END TEST accel_dualcast 00:10:15.283 ************************************ 00:10:15.283 14:03:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:15.283 14:03:01 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:15.283 14:03:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:15.283 14:03:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.283 14:03:01 accel -- common/autotest_common.sh@10 -- # set +x 00:10:15.283 ************************************ 00:10:15.283 START TEST accel_compare 00:10:15.283 ************************************ 00:10:15.283 14:03:01 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:10:15.283 14:03:01 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:10:15.283 [2024-07-15 14:03:01.199615] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
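The closing assertion of each test, rendered above as `[[ software == \s\o\f\t\w\a\r\e ]]`, looks odd only because of how xtrace prints a quoted right-hand side: quoting suppresses glob matching inside `[[ ]]`, and the trace shows that by backslash-escaping every character. A small stand-alone illustration:

```bash
#!/usr/bin/env bash
mod=software
[[ $mod == "software" ]]      && echo "quoted RHS: literal comparison (what the test does)"
[[ $mod == s\o\f\t\w\a\r\e ]] && echo "same literal comparison, as xtrace renders it"
[[ $mod == soft* ]]           && echo "an unquoted * would be treated as a glob instead"
```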
00:10:15.283 [2024-07-15 14:03:01.200468] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181881 ] 00:10:15.541 [2024-07-15 14:03:01.374058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.818 [2024-07-15 14:03:01.602645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var 
val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:16.078 14:03:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:10:17.980 14:03:03 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:17.980 00:10:17.980 real 0m2.532s 00:10:17.980 user 0m2.266s 00:10:17.980 sys 0m0.183s 00:10:17.980 14:03:03 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.980 14:03:03 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:10:17.980 ************************************ 00:10:17.980 END TEST accel_compare 00:10:17.980 ************************************ 00:10:17.980 14:03:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:17.980 14:03:03 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:17.980 14:03:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:10:17.980 14:03:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.980 14:03:03 accel -- common/autotest_common.sh@10 -- # set +x 00:10:17.980 ************************************ 00:10:17.980 START TEST accel_xor 00:10:17.980 ************************************ 00:10:17.980 14:03:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:17.980 14:03:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:17.980 [2024-07-15 14:03:03.779550] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
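The long runs of `IFS=:`, `read -r var val`, and `case "$var" in` throughout these tests are the script walking accel_perf's output one line at a time, splitting each line on the first ':' and capturing the fields it checks at the end (module, opcode, and so on). A hedged, self-contained version of that loop; the key names and sample input below are invented for illustration, not copied from accel_perf:

```bash
#!/usr/bin/env bash
# Hedged sketch of the IFS=: / read -r var val / case "$var" parsing pattern.
parse_accel_output() {
    local module='' opcode=''
    while IFS=: read -r var val; do
        case "$var" in
            *Module*)   module=${val//[[:space:]]/} ;;
            *Workload*) opcode=${val//[[:space:]]/} ;;
        esac
    done
    # The tests end with checks of exactly this shape:
    [[ -n $module ]] && [[ -n $opcode ]] && [[ $module == "software" ]]
}

printf 'Workload: copy_crc32c\nModule: software\n' | parse_accel_output \
    && echo "software module handled the op"
```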
00:10:17.980 [2024-07-15 14:03:03.780269] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181937 ] 00:10:17.980 [2024-07-15 14:03:03.931353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.237 [2024-07-15 14:03:04.199451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.494 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:18.495 14:03:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:20.394 14:03:06 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.394 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:20.395 00:10:20.395 real 0m2.547s 00:10:20.395 user 0m2.286s 00:10:20.395 sys 0m0.192s 00:10:20.395 14:03:06 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:20.395 14:03:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:20.395 ************************************ 00:10:20.395 END TEST accel_xor 00:10:20.395 ************************************ 00:10:20.395 14:03:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:20.395 14:03:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:20.395 14:03:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:20.395 14:03:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.395 14:03:06 accel -- common/autotest_common.sh@10 -- # set +x 00:10:20.395 ************************************ 00:10:20.395 START TEST accel_xor 00:10:20.395 ************************************ 00:10:20.395 14:03:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:10:20.395 14:03:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:10:20.395 [2024-07-15 14:03:06.383961] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:10:20.395 [2024-07-15 14:03:06.384776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181995 ] 00:10:20.652 [2024-07-15 14:03:06.543151] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.910 [2024-07-15 14:03:06.768369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 
00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:21.167 14:03:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.066 14:03:08 accel.accel_xor 
-- accel/accel.sh@21 -- # case "$var" in 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:10:23.066 14:03:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.066 00:10:23.066 real 0m2.515s 00:10:23.066 user 0m2.254s 00:10:23.066 sys 0m0.191s 00:10:23.066 14:03:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.066 14:03:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:10:23.066 ************************************ 00:10:23.066 END TEST accel_xor 00:10:23.066 ************************************ 00:10:23.066 14:03:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:23.066 14:03:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:23.066 14:03:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:23.066 14:03:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.066 14:03:08 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.066 ************************************ 00:10:23.066 START TEST accel_dif_verify 00:10:23.066 ************************************ 00:10:23.066 14:03:08 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:10:23.066 14:03:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:10:23.066 [2024-07-15 14:03:08.949500] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:10:23.066 [2024-07-15 14:03:08.949913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182050 ] 00:10:23.379 [2024-07-15 14:03:09.107368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.379 [2024-07-15 14:03:09.326970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:10:23.655 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:23.656 14:03:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:25.558 14:03:11 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:10:25.558 14:03:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:25.558 00:10:25.558 real 0m2.494s 00:10:25.558 user 0m2.269s 00:10:25.558 sys 0m0.171s 00:10:25.558 14:03:11 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:25.558 14:03:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 ************************************ 00:10:25.558 END TEST accel_dif_verify 00:10:25.558 ************************************ 00:10:25.558 14:03:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:25.558 14:03:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:25.558 14:03:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:25.558 14:03:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:25.558 14:03:11 accel -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 ************************************ 00:10:25.558 START TEST accel_dif_generate 00:10:25.558 ************************************ 00:10:25.558 14:03:11 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:25.558 14:03:11 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:10:25.558 14:03:11 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:10:25.558 [2024-07-15 14:03:11.484939] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:25.558 [2024-07-15 14:03:11.485244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182107 ] 00:10:25.816 [2024-07-15 14:03:11.638087] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.074 [2024-07-15 14:03:11.850977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.074 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.075 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:10:26.075 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.075 14:03:12 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:10:26.075 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.075 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:10:26.333 14:03:12 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:26.333 14:03:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:10:28.242 14:03:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.242 00:10:28.242 real 0m2.500s 
00:10:28.242 user 0m2.272s 00:10:28.242 sys 0m0.174s 00:10:28.242 14:03:13 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.242 14:03:13 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:10:28.242 ************************************ 00:10:28.242 END TEST accel_dif_generate 00:10:28.242 ************************************ 00:10:28.242 14:03:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:28.242 14:03:13 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:28.242 14:03:13 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:10:28.242 14:03:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.242 14:03:13 accel -- common/autotest_common.sh@10 -- # set +x 00:10:28.242 ************************************ 00:10:28.242 START TEST accel_dif_generate_copy 00:10:28.242 ************************************ 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:10:28.242 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:10:28.242 [2024-07-15 14:03:14.037515] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:10:28.242 [2024-07-15 14:03:14.037903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182158 ] 00:10:28.242 [2024-07-15 14:03:14.200521] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.500 [2024-07-15 14:03:14.458109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:28.758 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:28.759 14:03:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:30.658 00:10:30.658 real 0m2.548s 00:10:30.658 user 0m2.293s 00:10:30.658 sys 0m0.188s 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.658 14:03:16 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:10:30.658 ************************************ 00:10:30.658 END TEST accel_dif_generate_copy 00:10:30.658 ************************************ 00:10:30.658 14:03:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:30.658 14:03:16 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:10:30.659 14:03:16 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:30.659 14:03:16 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:10:30.659 14:03:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.659 14:03:16 accel -- common/autotest_common.sh@10 -- # set +x 00:10:30.659 ************************************ 00:10:30.659 START TEST accel_comp 00:10:30.659 ************************************ 00:10:30.659 14:03:16 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:10:30.659 14:03:16 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:10:30.659 14:03:16 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:10:30.659 [2024-07-15 14:03:16.642028] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:30.659 [2024-07-15 14:03:16.642374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182222 ] 00:10:30.917 [2024-07-15 14:03:16.805173] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.176 [2024-07-15 14:03:17.061946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:31.435 14:03:17 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:33.365 14:03:19 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:33.365 00:10:33.365 real 0m2.599s 00:10:33.365 user 0m2.329s 00:10:33.365 sys 0m0.211s 00:10:33.365 14:03:19 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.365 14:03:19 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:33.365 ************************************ 00:10:33.365 END TEST accel_comp 00:10:33.365 ************************************ 00:10:33.365 14:03:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:33.365 14:03:19 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.365 14:03:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:10:33.366 14:03:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.366 14:03:19 accel -- common/autotest_common.sh@10 -- # set +x 00:10:33.366 ************************************ 00:10:33.366 START TEST accel_decomp 00:10:33.366 ************************************ 00:10:33.366 14:03:19 
accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:33.366 14:03:19 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:33.366 [2024-07-15 14:03:19.292397] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:33.366 [2024-07-15 14:03:19.292788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182272 ] 00:10:33.624 [2024-07-15 14:03:19.455621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.884 [2024-07-15 14:03:19.728547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:34.142 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:19 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:19 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 
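The repeated "IFS=: / read -r var val / case "$var" in" entries above are accel.sh walking through the settings that accel_perf reports, one colon-separated line at a time; the assignments accel_opc=decompress and accel_module=software visible in the trace are the two fields it captures. A minimal sketch of that pattern, assuming an illustrative parse_perf_settings name and simplified case patterns (neither is taken from accel.sh itself):

    # Reads "key: value" lines on stdin and keeps the two fields the trace above
    # shows being captured (operation and module); spaces are stripped from values.
    parse_perf_settings() {
        local var val accel_opc= accel_module=
        while IFS=: read -r var val; do
            case "$var" in
                *[Ww]orkload*) accel_opc=${val// /} ;;    # e.g. decompress
                *[Mm]odule*)   accel_module=${val// /} ;; # e.g. software
            esac
        done
        echo "opc=${accel_opc} module=${accel_module}"
    }
    # e.g.: printf 'Workload Type: decompress\nModule: software\n' | parse_perf_settings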
00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:34.143 14:03:20 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:36.079 14:03:21 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:36.079 00:10:36.079 real 0m2.620s 00:10:36.079 user 0m2.342s 00:10:36.079 sys 0m0.206s 00:10:36.079 14:03:21 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.079 14:03:21 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:36.079 ************************************ 00:10:36.079 END TEST accel_decomp 00:10:36.079 ************************************ 00:10:36.079 14:03:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:36.079 14:03:21 accel -- accel/accel.sh@118 -- # run_test 
accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:36.079 14:03:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:36.079 14:03:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.079 14:03:21 accel -- common/autotest_common.sh@10 -- # set +x 00:10:36.079 ************************************ 00:10:36.079 START TEST accel_decomp_full 00:10:36.079 ************************************ 00:10:36.079 14:03:21 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:10:36.079 14:03:21 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:10:36.079 [2024-07-15 14:03:21.968537] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
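The accel_decomp_full run above differs from plain accel_decomp only in the extra -o 0 flag, and its value trace below shows '111250 bytes' where the earlier run showed '4096 bytes', which suggests -o overrides the transfer size and 0 means the whole test/accel/bib input; that reading is inferred from the trace, not from accel_perf's help text. A standalone reproduction, assuming the working directory is an SPDK checkout with the examples built (the harness's -c /dev/fd/62 JSON config is omitted here):

    # Decompress the bundled test input for 1 second using one full-size buffer
    # (-o 0, per the trace); -y is carried over from the harness invocation.
    ./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0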
00:10:36.079 [2024-07-15 14:03:21.968957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182331 ] 00:10:36.337 [2024-07-15 14:03:22.128550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.595 [2024-07-15 14:03:22.352866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:10:36.595 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:36.596 14:03:22 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.500 14:03:24 
accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.500 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.501 14:03:24 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:10:38.501 14:03:24 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:10:38.501 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:10:38.501 14:03:24 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:10:38.501 14:03:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:38.501 14:03:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:38.501 14:03:24 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.501 00:10:38.501 real 0m2.537s 00:10:38.501 user 0m2.287s 00:10:38.501 sys 0m0.178s 00:10:38.501 14:03:24 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.501 14:03:24 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:10:38.501 ************************************ 00:10:38.501 END TEST accel_decomp_full 00:10:38.501 ************************************ 00:10:38.501 14:03:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:38.501 14:03:24 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:38.501 14:03:24 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:38.501 14:03:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.501 14:03:24 accel -- common/autotest_common.sh@10 -- # set +x 00:10:38.768 ************************************ 00:10:38.768 START TEST accel_decomp_mcore 00:10:38.768 ************************************ 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:38.768 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:38.769 14:03:24 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:38.769 [2024-07-15 14:03:24.548539] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:38.769 [2024-07-15 14:03:24.548917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182393 ] 00:10:38.769 [2024-07-15 14:03:24.750707] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.031 [2024-07-15 14:03:25.005810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.031 [2024-07-15 14:03:25.005917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.031 [2024-07-15 14:03:25.006032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.032 [2024-07-15 14:03:25.006035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:39.289 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:39.290 14:03:25 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.208 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.209 14:03:27 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:41.209 00:10:41.209 real 0m2.616s 00:10:41.209 user 0m7.416s 00:10:41.209 sys 0m0.237s 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:41.209 14:03:27 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:41.209 ************************************ 00:10:41.209 END TEST accel_decomp_mcore 00:10:41.209 ************************************ 00:10:41.209 14:03:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:41.209 14:03:27 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:41.209 14:03:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:41.209 14:03:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:41.209 14:03:27 accel -- common/autotest_common.sh@10 -- # set +x 00:10:41.209 ************************************ 00:10:41.209 START TEST accel_decomp_full_mcore 00:10:41.209 ************************************ 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:41.209 14:03:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:41.209 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:41.466 [2024-07-15 14:03:27.217555] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:41.466 [2024-07-15 14:03:27.218161] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182447 ] 00:10:41.466 [2024-07-15 14:03:27.413262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:41.724 [2024-07-15 14:03:27.651438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:41.724 [2024-07-15 14:03:27.651567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:41.724 [2024-07-15 14:03:27.651667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.724 [2024-07-15 14:03:27.651664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.981 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:41.982 14:03:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:41.982 14:03:27 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:43.882 00:10:43.882 real 0m2.646s 00:10:43.882 user 0m7.629s 00:10:43.882 sys 0m0.220s 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:43.882 14:03:29 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:43.882 ************************************ 00:10:43.882 END TEST accel_decomp_full_mcore 00:10:43.882 ************************************ 00:10:43.882 14:03:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:43.882 14:03:29 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:43.882 14:03:29 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:10:43.882 14:03:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:43.882 14:03:29 accel -- common/autotest_common.sh@10 -- # set +x 00:10:43.882 ************************************ 00:10:43.882 START TEST accel_decomp_mthread 00:10:43.882 ************************************ 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:43.882 14:03:29 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:44.148 [2024-07-15 14:03:29.911947] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
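For the two -m 0xf runs above, wall-clock time stays around 2.6 s while user time jumps to 7.4-7.6 s, which is what you would expect with the four reactors the EAL reports running the workload in parallel. A quick check of average core utilisation for the accel_decomp_full_mcore numbers just reported (pure arithmetic on the logged values):

    # user / real for the full_mcore run: 7.629 s of CPU over 2.646 s of wall clock
    awk 'BEGIN { printf "avg cores busy: %.2f of 4\n", 7.629 / 2.646 }'   # ~2.88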
00:10:44.148 [2024-07-15 14:03:29.912344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182513 ] 00:10:44.148 [2024-07-15 14:03:30.076732] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.407 [2024-07-15 14:03:30.309584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 
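The START TEST / END TEST banners and the real/user/sys lines that frame every block in this log come from the run_test wrapper (common/autotest_common.sh, per the trace). A simplified stand-in that only reproduces the output shape seen here; the real run_test also handles xtrace and return-code bookkeeping, so this is an illustration rather than the actual implementation:

    # Prints banners around a timed command, like the blocks in this log.
    run_test_sketch() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
    # e.g.: run_test_sketch demo sleep 1   # banners around a "real 0m1.0xxs" timing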
00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:44.665 14:03:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:46.567 00:10:46.567 real 0m2.539s 00:10:46.567 user 0m2.273s 00:10:46.567 sys 0m0.190s 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.567 14:03:32 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:46.567 ************************************ 00:10:46.567 END TEST accel_decomp_mthread 00:10:46.567 ************************************ 00:10:46.567 14:03:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:46.567 14:03:32 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:46.567 14:03:32 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:10:46.567 14:03:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.567 14:03:32 accel -- common/autotest_common.sh@10 -- # set +x 00:10:46.567 ************************************ 00:10:46.567 START 
TEST accel_decomp_full_mthread 00:10:46.567 ************************************ 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:46.567 14:03:32 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:46.567 [2024-07-15 14:03:32.497694] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:10:46.567 [2024-07-15 14:03:32.497913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182564 ] 00:10:46.826 [2024-07-15 14:03:32.660585] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.085 [2024-07-15 14:03:32.927314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.360 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:47.361 14:03:33 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:47.361 14:03:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:49.263 00:10:49.263 real 0m2.622s 00:10:49.263 user 0m2.354s 00:10:49.263 sys 0m0.202s 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:49.263 14:03:35 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:49.263 ************************************ 00:10:49.263 END TEST accel_decomp_full_mthread 00:10:49.263 ************************************ 
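For reference, the accel_decomp_full_mthread case that just finished is driven by the single accel_perf invocation visible in its xtrace (build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l .../test/accel/bib -y -o 0 -T 2). Below is a hedged sketch of reproducing it by hand; the flag annotations are my reading of the values echoed in the trace (decompress opcode, software module, '111250 bytes', 2 threads, '1 seconds'), not an authoritative description of accel_perf's CLI, and the -c config descriptor is normally supplied by accel.sh rather than typed manually.

# Sketch only: flag meanings are inferred from the traced values above and may differ between SPDK revisions.
#   -t 1            run the workload for 1 second ('1 seconds' in the trace)
#   -w decompress   opcode under test
#   -l .../bib      pre-compressed input file shipped with the accel tests
#   -y              verify the decompressed output
#   -o 0            appears to select the full uncompressed buffer ('111250 bytes' here vs. '4096 bytes'
#                   in the plain accel_decomp_mthread case earlier)
#   -T 2            two worker threads -- the "mthread" part of the test name
SPDK_REPO=/home/vagrant/spdk_repo/spdk
"$SPDK_REPO/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_REPO/test/accel/bib" -y -o 0 -T 2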
00:10:49.263 14:03:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:49.263 14:03:35 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:10:49.263 14:03:35 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:49.263 14:03:35 accel -- accel/accel.sh@137 -- # build_accel_config 00:10:49.263 14:03:35 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:49.263 14:03:35 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:10:49.263 14:03:35 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.263 14:03:35 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.263 14:03:35 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:49.263 14:03:35 accel -- accel/accel.sh@40 -- # local IFS=, 00:10:49.263 14:03:35 accel -- accel/accel.sh@41 -- # jq -r . 00:10:49.263 14:03:35 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:49.263 14:03:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.263 14:03:35 accel -- common/autotest_common.sh@10 -- # set +x 00:10:49.263 ************************************ 00:10:49.263 START TEST accel_dif_functional_tests 00:10:49.263 ************************************ 00:10:49.263 14:03:35 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:10:49.263 [2024-07-15 14:03:35.179331] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:49.263 [2024-07-15 14:03:35.179539] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182622 ] 00:10:49.522 [2024-07-15 14:03:35.352788] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.781 [2024-07-15 14:03:35.573108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.781 [2024-07-15 14:03:35.573155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.781 [2024-07-15 14:03:35.573159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.039 00:10:50.039 00:10:50.039 CUnit - A unit testing framework for C - Version 2.1-3 00:10:50.039 http://cunit.sourceforge.net/ 00:10:50.039 00:10:50.039 00:10:50.039 Suite: accel_dif 00:10:50.039 Test: verify: DIF generated, GUARD check ...passed 00:10:50.039 Test: verify: DIF generated, APPTAG check ...passed 00:10:50.039 Test: verify: DIF generated, REFTAG check ...passed 00:10:50.040 Test: verify: DIF not generated, GUARD check ...[2024-07-15 14:03:35.881122] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:50.040 passed 00:10:50.040 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 14:03:35.881455] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:50.040 passed 00:10:50.040 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 14:03:35.881594] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:50.040 passed 00:10:50.040 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:50.040 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 14:03:35.881768] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:50.040 passed 00:10:50.040 Test: verify: APPTAG incorrect, 
no APPTAG check ...passed 00:10:50.040 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:50.040 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:50.040 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 14:03:35.882054] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:50.040 passed 00:10:50.040 Test: verify copy: DIF generated, GUARD check ...passed 00:10:50.040 Test: verify copy: DIF generated, APPTAG check ...passed 00:10:50.040 Test: verify copy: DIF generated, REFTAG check ...passed 00:10:50.040 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 14:03:35.882367] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:50.040 passed 00:10:50.040 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 14:03:35.882694] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:50.040 passed 00:10:50.040 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 14:03:35.883005] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:50.040 passed 00:10:50.040 Test: generate copy: DIF generated, GUARD check ...passed 00:10:50.040 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:50.040 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:50.040 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:10:50.040 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:10:50.040 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:10:50.040 Test: generate copy: iovecs-len validate ...[2024-07-15 14:03:35.883647] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:10:50.040 passed 00:10:50.040 Test: generate copy: buffer alignment validate ...passed 00:10:50.040 00:10:50.040 Run Summary: Type Total Ran Passed Failed Inactive 00:10:50.040 suites 1 1 n/a 0 0 00:10:50.040 tests 26 26 26 0 0 00:10:50.040 asserts 115 115 115 0 n/a 00:10:50.040 00:10:50.040 Elapsed time = 0.007 seconds 00:10:51.416 00:10:51.416 real 0m2.059s 00:10:51.416 user 0m4.097s 00:10:51.416 sys 0m0.248s 00:10:51.416 14:03:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.416 14:03:37 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:10:51.416 ************************************ 00:10:51.416 END TEST accel_dif_functional_tests 00:10:51.416 ************************************ 00:10:51.416 14:03:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:10:51.416 00:10:51.416 real 1m1.788s 00:10:51.416 user 1m7.582s 00:10:51.416 sys 0m5.934s 00:10:51.416 14:03:37 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.416 ************************************ 00:10:51.416 END TEST accel 00:10:51.416 14:03:37 accel -- common/autotest_common.sh@10 -- # set +x 00:10:51.416 ************************************ 00:10:51.416 14:03:37 -- common/autotest_common.sh@1142 -- # return 0 00:10:51.416 14:03:37 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:51.416 14:03:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:51.416 14:03:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.416 14:03:37 -- common/autotest_common.sh@10 -- # set +x 00:10:51.416 ************************************ 00:10:51.416 START TEST accel_rpc 00:10:51.416 ************************************ 00:10:51.416 14:03:37 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:10:51.416 * Looking for test storage... 00:10:51.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:10:51.416 14:03:37 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:51.416 14:03:37 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=182714 00:10:51.416 14:03:37 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:51.416 14:03:37 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 182714 00:10:51.416 14:03:37 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 182714 ']' 00:10:51.416 14:03:37 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.416 14:03:37 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.416 14:03:37 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.416 14:03:37 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.416 14:03:37 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.416 [2024-07-15 14:03:37.397221] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:10:51.416 [2024-07-15 14:03:37.397395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182714 ] 00:10:51.675 [2024-07-15 14:03:37.547678] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.934 [2024-07-15 14:03:37.764381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.501 14:03:38 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.501 14:03:38 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:52.501 14:03:38 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:52.501 14:03:38 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:10:52.501 14:03:38 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:52.501 14:03:38 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:10:52.501 14:03:38 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:52.501 14:03:38 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:52.501 14:03:38 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.501 14:03:38 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.501 ************************************ 00:10:52.501 START TEST accel_assign_opcode 00:10:52.501 ************************************ 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:52.501 [2024-07-15 14:03:38.449803] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.501 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:52.501 [2024-07-15 14:03:38.457721] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:52.502 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.502 14:03:38 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:52.502 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.502 14:03:38 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.437 software 00:10:53.437 00:10:53.437 real 0m0.858s 00:10:53.437 user 0m0.052s 00:10:53.437 sys 0m0.010s 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:53.437 ************************************ 00:10:53.437 END TEST accel_assign_opcode 00:10:53.437 14:03:39 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:10:53.437 ************************************ 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:10:53.437 14:03:39 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 182714 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 182714 ']' 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 182714 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 182714 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:53.437 killing process with pid 182714 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 182714' 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@967 -- # kill 182714 00:10:53.437 14:03:39 accel_rpc -- common/autotest_common.sh@972 -- # wait 182714 00:10:55.967 00:10:55.967 real 0m4.283s 00:10:55.967 user 0m4.325s 00:10:55.967 sys 0m0.510s 00:10:55.967 14:03:41 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:55.967 14:03:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.967 ************************************ 00:10:55.967 END TEST accel_rpc 00:10:55.967 ************************************ 00:10:55.967 14:03:41 -- common/autotest_common.sh@1142 -- # return 0 00:10:55.967 14:03:41 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:55.967 14:03:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:55.967 14:03:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:55.967 14:03:41 -- common/autotest_common.sh@10 -- # set +x 00:10:55.967 ************************************ 00:10:55.967 START TEST app_cmdline 00:10:55.967 ************************************ 00:10:55.967 14:03:41 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:55.967 * Looking for test storage... 
00:10:55.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:55.967 14:03:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:55.967 14:03:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=182848 00:10:55.967 14:03:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 182848 00:10:55.967 14:03:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:55.967 14:03:41 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 182848 ']' 00:10:55.967 14:03:41 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.967 14:03:41 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.967 14:03:41 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.967 14:03:41 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.967 14:03:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:55.967 [2024-07-15 14:03:41.744534] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:10:55.967 [2024-07-15 14:03:41.744752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182848 ] 00:10:55.967 [2024-07-15 14:03:41.912938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.226 [2024-07-15 14:03:42.171019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.161 14:03:42 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.161 14:03:42 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:10:57.161 14:03:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:57.426 { 00:10:57.426 "version": "SPDK v24.09-pre git sha1 255871c19", 00:10:57.426 "fields": { 00:10:57.426 "major": 24, 00:10:57.426 "minor": 9, 00:10:57.426 "patch": 0, 00:10:57.426 "suffix": "-pre", 00:10:57.426 "commit": "255871c19" 00:10:57.426 } 00:10:57.426 } 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:57.426 14:03:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:57.426 14:03:43 app_cmdline 
-- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:57.426 14:03:43 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:57.683 request: 00:10:57.684 { 00:10:57.684 "method": "env_dpdk_get_mem_stats", 00:10:57.684 "req_id": 1 00:10:57.684 } 00:10:57.684 Got JSON-RPC error response 00:10:57.684 response: 00:10:57.684 { 00:10:57.684 "code": -32601, 00:10:57.684 "message": "Method not found" 00:10:57.684 } 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:57.684 14:03:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 182848 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 182848 ']' 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 182848 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 182848 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 182848' 00:10:57.684 killing process with pid 182848 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@967 -- # kill 182848 00:10:57.684 14:03:43 app_cmdline -- common/autotest_common.sh@972 -- # wait 182848 00:11:00.213 00:11:00.213 real 0m4.180s 00:11:00.213 user 0m4.669s 00:11:00.213 sys 0m0.591s 00:11:00.213 14:03:45 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.213 14:03:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:00.213 ************************************ 00:11:00.213 END TEST app_cmdline 00:11:00.213 ************************************ 00:11:00.213 14:03:45 -- common/autotest_common.sh@1142 -- # return 0 00:11:00.213 14:03:45 -- 
spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:00.213 14:03:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:00.213 14:03:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.213 14:03:45 -- common/autotest_common.sh@10 -- # set +x 00:11:00.213 ************************************ 00:11:00.213 START TEST version 00:11:00.213 ************************************ 00:11:00.213 14:03:45 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:00.213 * Looking for test storage... 00:11:00.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:00.213 14:03:45 version -- app/version.sh@17 -- # get_header_version major 00:11:00.213 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:11:00.213 14:03:45 version -- app/version.sh@17 -- # major=24 00:11:00.213 14:03:45 version -- app/version.sh@18 -- # get_header_version minor 00:11:00.213 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:11:00.213 14:03:45 version -- app/version.sh@18 -- # minor=9 00:11:00.213 14:03:45 version -- app/version.sh@19 -- # get_header_version patch 00:11:00.213 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:11:00.213 14:03:45 version -- app/version.sh@19 -- # patch=0 00:11:00.213 14:03:45 version -- app/version.sh@20 -- # get_header_version suffix 00:11:00.213 14:03:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # cut -f2 00:11:00.213 14:03:45 version -- app/version.sh@14 -- # tr -d '"' 00:11:00.213 14:03:45 version -- app/version.sh@20 -- # suffix=-pre 00:11:00.213 14:03:45 version -- app/version.sh@22 -- # version=24.9 00:11:00.213 14:03:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:00.213 14:03:45 version -- app/version.sh@28 -- # version=24.9rc0 00:11:00.213 14:03:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:00.213 14:03:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:00.213 14:03:46 version -- app/version.sh@30 -- # py_version=24.9rc0 00:11:00.213 14:03:46 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:11:00.213 00:11:00.213 real 0m0.165s 00:11:00.213 user 0m0.103s 00:11:00.213 sys 0m0.091s 00:11:00.213 14:03:46 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.213 14:03:46 version -- common/autotest_common.sh@10 -- # set +x 00:11:00.213 ************************************ 00:11:00.213 END TEST version 00:11:00.213 ************************************ 
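As an aside on the app_cmdline run above: it starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then exercises both the allowed and the rejected paths through scripts/rpc.py. A minimal sketch of the same three calls, assuming such a target is already listening on the default /var/tmp/spdk.sock:

# Assumes a spdk_tgt started with: spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" spdk_get_version         # allowed: returns the version/commit JSON shown above
"$RPC" rpc_get_methods          # allowed: lists exactly the two whitelisted methods
"$RPC" env_dpdk_get_mem_stats   # blocked: fails with JSON-RPC error -32601 "Method not found"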
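The version check that closes this block is plain text extraction from include/spdk/version.h compared against the installed Python package. A hedged condensation of what version.sh does follows; the helper function and the '-pre' to 'rc0' mapping are my paraphrase of the script, not verbatim from it.

# Sketch under the assumption that version.h keeps '#define SPDK_VERSION_* <tab> value' lines,
# which is what the cut -f2 in the trace above relies on.
HDR=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
ver() { grep -E "^#define $1[[:space:]]+" "$HDR" | cut -f2 | tr -d '"'; }
major=$(ver SPDK_VERSION_MAJOR)     # 24 in this run
minor=$(ver SPDK_VERSION_MINOR)     # 9
patch=$(ver SPDK_VERSION_PATCH)     # 0
suffix=$(ver SPDK_VERSION_SUFFIX)   # -pre
# version.sh builds "24.9" (patch is 0) and, because a suffix is present, compares "24.9rc0"
# against the Python module's view of itself:
python3 -c 'import spdk; print(spdk.__version__)'   # printed 24.9rc0 above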
00:11:00.213 14:03:46 -- common/autotest_common.sh@1142 -- # return 0 00:11:00.213 14:03:46 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:11:00.213 14:03:46 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:00.213 14:03:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:00.213 14:03:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.213 14:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:00.213 ************************************ 00:11:00.213 START TEST blockdev_general 00:11:00.213 ************************************ 00:11:00.213 14:03:46 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:00.213 * Looking for test storage... 00:11:00.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:00.213 14:03:46 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:11:00.213 14:03:46 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=183038 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:00.214 14:03:46 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 183038 00:11:00.214 14:03:46 blockdev_general -- 
common/autotest_common.sh@829 -- # '[' -z 183038 ']' 00:11:00.214 14:03:46 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.214 14:03:46 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:00.214 14:03:46 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.214 14:03:46 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:00.214 14:03:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:00.214 [2024-07-15 14:03:46.204561] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:11:00.214 [2024-07-15 14:03:46.205008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183038 ] 00:11:00.474 [2024-07-15 14:03:46.357422] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.731 [2024-07-15 14:03:46.655530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.296 14:03:47 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:01.296 14:03:47 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:11:01.296 14:03:47 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:11:01.296 14:03:47 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:11:01.296 14:03:47 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:01.296 14:03:47 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:01.296 14:03:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.252 [2024-07-15 14:03:48.103514] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:02.252 [2024-07-15 14:03:48.104478] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:02.252 00:11:02.252 [2024-07-15 14:03:48.111569] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:02.252 [2024-07-15 14:03:48.112085] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:02.252 00:11:02.252 Malloc0 00:11:02.252 Malloc1 00:11:02.252 Malloc2 00:11:02.509 Malloc3 00:11:02.509 Malloc4 00:11:02.509 Malloc5 00:11:02.509 Malloc6 00:11:02.509 Malloc7 00:11:02.509 Malloc8 00:11:02.779 Malloc9 00:11:02.779 [2024-07-15 14:03:48.555750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:02.779 [2024-07-15 14:03:48.556392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:02.779 [2024-07-15 14:03:48.556655] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:02.779 [2024-07-15 14:03:48.556956] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:02.779 [2024-07-15 14:03:48.559257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:02.779 [2024-07-15 14:03:48.559507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:02.779 TestPT 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.779 
14:03:48 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:02.779 5000+0 records in 00:11:02.779 5000+0 records out 00:11:02.779 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0215527 s, 475 MB/s 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.779 AIO0 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:11:02.779 14:03:48 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.779 14:03:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:03.038 14:03:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:03.038 14:03:48 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:11:03.038 14:03:48 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:11:03.039 14:03:48 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d02aa269-ae2f-4cbb-ac80-2afa3ffaa0e2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d02aa269-ae2f-4cbb-ac80-2afa3ffaa0e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": 
true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3ab29fa7-2995-5599-a4eb-acc58a058ca0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ab29fa7-2995-5599-a4eb-acc58a058ca0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c2461ec8-13b5-57b5-beca-f48d19444804"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c2461ec8-13b5-57b5-beca-f48d19444804",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c8c09a0c-bc3f-5a99-adfe-cc25537c2a4c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8c09a0c-bc3f-5a99-adfe-cc25537c2a4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' 
"06c46d9c-9000-5e14-9782-7a0f7454357b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06c46d9c-9000-5e14-9782-7a0f7454357b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "de5d9cfe-7834-5b8b-ba7c-3c9e05d606f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "de5d9cfe-7834-5b8b-ba7c-3c9e05d606f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "9820adfd-bd9a-5946-8588-cd4d1808911c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9820adfd-bd9a-5946-8588-cd4d1808911c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cfca1155-dd52-564a-85bc-6b698daece95"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfca1155-dd52-564a-85bc-6b698daece95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": 
false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "20b8f8f8-8b1e-514e-842f-5f1ad1e62135"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20b8f8f8-8b1e-514e-842f-5f1ad1e62135",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1357ee3a-ebab-5b7c-bd88-9005fb61cde8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1357ee3a-ebab-5b7c-bd88-9005fb61cde8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "5cfb737f-2cc0-5b14-999e-8e285b635145"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5cfb737f-2cc0-5b14-999e-8e285b635145",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9699ae8d-77e7-54f6-8ecf-1b37edbd3ccd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9699ae8d-77e7-54f6-8ecf-1b37edbd3ccd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6a8469d3-f60c-47d8-8740-963cf5e2e931"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6a8469d3-f60c-47d8-8740-963cf5e2e931",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6a8469d3-f60c-47d8-8740-963cf5e2e931",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "d44b941d-5dd5-4fa0-89df-401bc33626b0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "add801b3-df31-45ca-ad40-c9efbe971f5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "046493ce-43fc-4be4-b04b-b47267d7e6b1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "046493ce-43fc-4be4-b04b-b47267d7e6b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "046493ce-43fc-4be4-b04b-b47267d7e6b1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3952e20d-61c6-48fd-8c7e-effa1899f995",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e119af84-3c5e-4662-b223-1b1bd2229dfa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "1d89eff0-9057-4dc7-af62-a370bbe0a535"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1d89eff0-9057-4dc7-af62-a370bbe0a535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1d89eff0-9057-4dc7-af62-a370bbe0a535",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "94b20597-34e3-4fb8-877e-4f98b85b025b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7955d88b-042c-48ac-ad17-2829f4e9f3dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "64efa5b5-6251-406b-8d94-3d32bdfd81a7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "64efa5b5-6251-406b-8d94-3d32bdfd81a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' 
"copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:03.039 14:03:48 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:11:03.039 14:03:48 blockdev_general -- bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:11:03.039 14:03:48 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:11:03.039 14:03:48 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 183038 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 183038 ']' 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 183038 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 183038 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 183038' 00:11:03.039 killing process with pid 183038 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@967 -- # kill 183038 00:11:03.039 14:03:48 blockdev_general -- common/autotest_common.sh@972 -- # wait 183038 00:11:06.333 14:03:52 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:06.333 14:03:52 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:06.333 14:03:52 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:11:06.333 14:03:52 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.333 14:03:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:06.333 ************************************ 00:11:06.333 START TEST bdev_hello_world 00:11:06.333 ************************************ 00:11:06.333 14:03:52 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:06.333 [2024-07-15 14:03:52.145812] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:11:06.333 [2024-07-15 14:03:52.146544] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183140 ] 00:11:06.333 [2024-07-15 14:03:52.296058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.593 [2024-07-15 14:03:52.511767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.159 [2024-07-15 14:03:52.896584] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:07.159 [2024-07-15 14:03:52.897338] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:07.159 [2024-07-15 14:03:52.904528] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:07.159 [2024-07-15 14:03:52.904868] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:07.159 [2024-07-15 14:03:52.912580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:07.159 [2024-07-15 14:03:52.912960] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:07.159 [2024-07-15 14:03:52.913251] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:07.159 [2024-07-15 14:03:53.143552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:07.159 [2024-07-15 14:03:53.144267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:07.159 [2024-07-15 14:03:53.144591] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:07.159 [2024-07-15 14:03:53.144913] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:07.159 [2024-07-15 14:03:53.147675] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:07.159 [2024-07-15 14:03:53.148340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:07.789 [2024-07-15 14:03:53.507550] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:07.789 [2024-07-15 14:03:53.508209] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:07.789 [2024-07-15 14:03:53.508580] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:07.789 [2024-07-15 14:03:53.508996] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:07.789 [2024-07-15 14:03:53.509407] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:07.789 [2024-07-15 14:03:53.509722] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:07.789 [2024-07-15 14:03:53.510092] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
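The hello_bdev example driven here opens the Malloc0 bdev described in bdev.json, obtains an I/O channel, writes the string "Hello World!" to the device, then reads it back and prints it, which is exactly the sequence of NOTICE lines above. A minimal manual invocation from the SPDK repo root follows the same shape (paths shortened from the absolute /home/vagrant/spdk_repo/spdk paths used by this job):

    $ ./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0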
00:11:07.789 00:11:07.789 [2024-07-15 14:03:53.510419] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:10.372 00:11:10.372 real 0m3.624s 00:11:10.372 user 0m3.058s 00:11:10.372 sys 0m0.404s 00:11:10.372 14:03:55 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.372 14:03:55 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:10.372 ************************************ 00:11:10.372 END TEST bdev_hello_world 00:11:10.372 ************************************ 00:11:10.372 14:03:55 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:10.372 14:03:55 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:11:10.372 14:03:55 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.372 14:03:55 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.372 14:03:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:10.372 ************************************ 00:11:10.372 START TEST bdev_bounds 00:11:10.372 ************************************ 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=183202 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 183202' 00:11:10.372 Process bdevio pid: 183202 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 183202 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 183202 ']' 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:10.372 14:03:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:10.372 [2024-07-15 14:03:55.835063] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
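The bdev_bounds test starting here is a two-process flow: the bdevio application is launched in wait mode (-w) against the same bdev.json, and the CUnit suites are then kicked off over its RPC socket with tests.py perform_tests, one suite per bdev from the I/O targets list. A manual sketch of the same flow (repo-relative paths; the trailing '' in the traced command is just an empty extra-arguments placeholder):

    $ ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    $ ./test/bdev/bdevio/tests.py perform_tests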
00:11:10.372 [2024-07-15 14:03:55.836099] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183202 ] 00:11:10.372 [2024-07-15 14:03:56.011079] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.372 [2024-07-15 14:03:56.262992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.372 [2024-07-15 14:03:56.263110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.372 [2024-07-15 14:03:56.263125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.938 [2024-07-15 14:03:56.681555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:10.938 [2024-07-15 14:03:56.682240] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:10.938 [2024-07-15 14:03:56.689499] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:10.938 [2024-07-15 14:03:56.689804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:10.938 [2024-07-15 14:03:56.697570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:10.938 [2024-07-15 14:03:56.697881] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:10.938 [2024-07-15 14:03:56.698128] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:10.938 [2024-07-15 14:03:56.900286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:10.938 [2024-07-15 14:03:56.901089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.938 [2024-07-15 14:03:56.901363] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:10.938 [2024-07-15 14:03:56.901590] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.938 [2024-07-15 14:03:56.904040] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.938 [2024-07-15 14:03:56.904290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:11.503 14:03:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:11.503 14:03:57 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:11:11.503 14:03:57 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:11.503 I/O targets: 00:11:11.503 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:11.503 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:11.503 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:11.503 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:11.503 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:11.503 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:11.503 concat0: 131072 blocks of 512 bytes (64 MiB) 
00:11:11.503 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:11.503 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:11.503 00:11:11.503 00:11:11.503 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.503 http://cunit.sourceforge.net/ 00:11:11.503 00:11:11.503 00:11:11.503 Suite: bdevio tests on: AIO0 00:11:11.503 Test: blockdev write read block ...passed 00:11:11.503 Test: blockdev write zeroes read block ...passed 00:11:11.503 Test: blockdev write zeroes read no split ...passed 00:11:11.503 Test: blockdev write zeroes read split ...passed 00:11:11.503 Test: blockdev write zeroes read split partial ...passed 00:11:11.503 Test: blockdev reset ...passed 00:11:11.503 Test: blockdev write read 8 blocks ...passed 00:11:11.503 Test: blockdev write read size > 128k ...passed 00:11:11.503 Test: blockdev write read invalid size ...passed 00:11:11.503 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.503 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.503 Test: blockdev write read max offset ...passed 00:11:11.503 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.503 Test: blockdev writev readv 8 blocks ...passed 00:11:11.503 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.503 Test: blockdev writev readv block ...passed 00:11:11.503 Test: blockdev writev readv size > 128k ...passed 00:11:11.503 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.503 Test: blockdev comparev and writev ...passed 00:11:11.503 Test: blockdev nvme passthru rw ...passed 00:11:11.503 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.503 Test: blockdev nvme admin passthru ...passed 00:11:11.503 Test: blockdev copy ...passed 00:11:11.503 Suite: bdevio tests on: raid1 00:11:11.503 Test: blockdev write read block ...passed 00:11:11.503 Test: blockdev write zeroes read block ...passed 00:11:11.503 Test: blockdev write zeroes read no split ...passed 00:11:11.503 Test: blockdev write zeroes read split ...passed 00:11:11.760 Test: blockdev write zeroes read split partial ...passed 00:11:11.760 Test: blockdev reset ...passed 00:11:11.760 Test: blockdev write read 8 blocks ...passed 00:11:11.760 Test: blockdev write read size > 128k ...passed 00:11:11.760 Test: blockdev write read invalid size ...passed 00:11:11.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.760 Test: blockdev write read max offset ...passed 00:11:11.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.760 Test: blockdev writev readv 8 blocks ...passed 00:11:11.760 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.760 Test: blockdev writev readv block ...passed 00:11:11.760 Test: blockdev writev readv size > 128k ...passed 00:11:11.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.760 Test: blockdev comparev and writev ...passed 00:11:11.760 Test: blockdev nvme passthru rw ...passed 00:11:11.760 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.760 Test: blockdev nvme admin passthru ...passed 00:11:11.760 Test: blockdev copy ...passed 00:11:11.760 Suite: bdevio tests on: concat0 00:11:11.760 Test: blockdev write read block ...passed 00:11:11.760 Test: blockdev write zeroes read block ...passed 00:11:11.760 Test: blockdev write zeroes read no split ...passed 00:11:11.760 Test: blockdev write zeroes read split 
...passed 00:11:11.760 Test: blockdev write zeroes read split partial ...passed 00:11:11.760 Test: blockdev reset ...passed 00:11:11.760 Test: blockdev write read 8 blocks ...passed 00:11:11.760 Test: blockdev write read size > 128k ...passed 00:11:11.760 Test: blockdev write read invalid size ...passed 00:11:11.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.760 Test: blockdev write read max offset ...passed 00:11:11.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.760 Test: blockdev writev readv 8 blocks ...passed 00:11:11.760 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.760 Test: blockdev writev readv block ...passed 00:11:11.760 Test: blockdev writev readv size > 128k ...passed 00:11:11.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.760 Test: blockdev comparev and writev ...passed 00:11:11.760 Test: blockdev nvme passthru rw ...passed 00:11:11.760 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.760 Test: blockdev nvme admin passthru ...passed 00:11:11.760 Test: blockdev copy ...passed 00:11:11.760 Suite: bdevio tests on: raid0 00:11:11.760 Test: blockdev write read block ...passed 00:11:11.760 Test: blockdev write zeroes read block ...passed 00:11:11.760 Test: blockdev write zeroes read no split ...passed 00:11:11.760 Test: blockdev write zeroes read split ...passed 00:11:11.760 Test: blockdev write zeroes read split partial ...passed 00:11:11.760 Test: blockdev reset ...passed 00:11:11.760 Test: blockdev write read 8 blocks ...passed 00:11:11.760 Test: blockdev write read size > 128k ...passed 00:11:11.760 Test: blockdev write read invalid size ...passed 00:11:11.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.760 Test: blockdev write read max offset ...passed 00:11:11.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.760 Test: blockdev writev readv 8 blocks ...passed 00:11:11.760 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.760 Test: blockdev writev readv block ...passed 00:11:11.760 Test: blockdev writev readv size > 128k ...passed 00:11:11.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.760 Test: blockdev comparev and writev ...passed 00:11:11.760 Test: blockdev nvme passthru rw ...passed 00:11:11.760 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.760 Test: blockdev nvme admin passthru ...passed 00:11:11.760 Test: blockdev copy ...passed 00:11:11.760 Suite: bdevio tests on: TestPT 00:11:11.760 Test: blockdev write read block ...passed 00:11:11.760 Test: blockdev write zeroes read block ...passed 00:11:11.760 Test: blockdev write zeroes read no split ...passed 00:11:11.760 Test: blockdev write zeroes read split ...passed 00:11:11.760 Test: blockdev write zeroes read split partial ...passed 00:11:11.760 Test: blockdev reset ...passed 00:11:11.760 Test: blockdev write read 8 blocks ...passed 00:11:11.760 Test: blockdev write read size > 128k ...passed 00:11:11.760 Test: blockdev write read invalid size ...passed 00:11:11.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.760 Test: blockdev write read max offset ...passed 00:11:11.760 Test: 
blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.760 Test: blockdev writev readv 8 blocks ...passed 00:11:11.760 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.760 Test: blockdev writev readv block ...passed 00:11:11.760 Test: blockdev writev readv size > 128k ...passed 00:11:11.760 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.760 Test: blockdev comparev and writev ...passed 00:11:11.760 Test: blockdev nvme passthru rw ...passed 00:11:11.760 Test: blockdev nvme passthru vendor specific ...passed 00:11:11.760 Test: blockdev nvme admin passthru ...passed 00:11:11.760 Test: blockdev copy ...passed 00:11:11.760 Suite: bdevio tests on: Malloc2p7 00:11:11.760 Test: blockdev write read block ...passed 00:11:11.760 Test: blockdev write zeroes read block ...passed 00:11:11.760 Test: blockdev write zeroes read no split ...passed 00:11:12.018 Test: blockdev write zeroes read split ...passed 00:11:12.018 Test: blockdev write zeroes read split partial ...passed 00:11:12.018 Test: blockdev reset ...passed 00:11:12.018 Test: blockdev write read 8 blocks ...passed 00:11:12.018 Test: blockdev write read size > 128k ...passed 00:11:12.018 Test: blockdev write read invalid size ...passed 00:11:12.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.018 Test: blockdev write read max offset ...passed 00:11:12.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.018 Test: blockdev writev readv 8 blocks ...passed 00:11:12.018 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.018 Test: blockdev writev readv block ...passed 00:11:12.018 Test: blockdev writev readv size > 128k ...passed 00:11:12.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.018 Test: blockdev comparev and writev ...passed 00:11:12.018 Test: blockdev nvme passthru rw ...passed 00:11:12.018 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.018 Test: blockdev nvme admin passthru ...passed 00:11:12.018 Test: blockdev copy ...passed 00:11:12.018 Suite: bdevio tests on: Malloc2p6 00:11:12.018 Test: blockdev write read block ...passed 00:11:12.018 Test: blockdev write zeroes read block ...passed 00:11:12.018 Test: blockdev write zeroes read no split ...passed 00:11:12.018 Test: blockdev write zeroes read split ...passed 00:11:12.018 Test: blockdev write zeroes read split partial ...passed 00:11:12.018 Test: blockdev reset ...passed 00:11:12.018 Test: blockdev write read 8 blocks ...passed 00:11:12.018 Test: blockdev write read size > 128k ...passed 00:11:12.018 Test: blockdev write read invalid size ...passed 00:11:12.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.018 Test: blockdev write read max offset ...passed 00:11:12.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.018 Test: blockdev writev readv 8 blocks ...passed 00:11:12.018 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.018 Test: blockdev writev readv block ...passed 00:11:12.018 Test: blockdev writev readv size > 128k ...passed 00:11:12.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.018 Test: blockdev comparev and writev ...passed 00:11:12.018 Test: blockdev nvme passthru rw ...passed 00:11:12.018 Test: blockdev nvme passthru vendor 
specific ...passed 00:11:12.018 Test: blockdev nvme admin passthru ...passed 00:11:12.018 Test: blockdev copy ...passed 00:11:12.018 Suite: bdevio tests on: Malloc2p5 00:11:12.018 Test: blockdev write read block ...passed 00:11:12.018 Test: blockdev write zeroes read block ...passed 00:11:12.018 Test: blockdev write zeroes read no split ...passed 00:11:12.018 Test: blockdev write zeroes read split ...passed 00:11:12.018 Test: blockdev write zeroes read split partial ...passed 00:11:12.018 Test: blockdev reset ...passed 00:11:12.018 Test: blockdev write read 8 blocks ...passed 00:11:12.018 Test: blockdev write read size > 128k ...passed 00:11:12.018 Test: blockdev write read invalid size ...passed 00:11:12.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.018 Test: blockdev write read max offset ...passed 00:11:12.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.018 Test: blockdev writev readv 8 blocks ...passed 00:11:12.018 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.018 Test: blockdev writev readv block ...passed 00:11:12.018 Test: blockdev writev readv size > 128k ...passed 00:11:12.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.018 Test: blockdev comparev and writev ...passed 00:11:12.018 Test: blockdev nvme passthru rw ...passed 00:11:12.018 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.018 Test: blockdev nvme admin passthru ...passed 00:11:12.018 Test: blockdev copy ...passed 00:11:12.018 Suite: bdevio tests on: Malloc2p4 00:11:12.018 Test: blockdev write read block ...passed 00:11:12.018 Test: blockdev write zeroes read block ...passed 00:11:12.018 Test: blockdev write zeroes read no split ...passed 00:11:12.018 Test: blockdev write zeroes read split ...passed 00:11:12.018 Test: blockdev write zeroes read split partial ...passed 00:11:12.018 Test: blockdev reset ...passed 00:11:12.018 Test: blockdev write read 8 blocks ...passed 00:11:12.018 Test: blockdev write read size > 128k ...passed 00:11:12.018 Test: blockdev write read invalid size ...passed 00:11:12.018 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.018 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.018 Test: blockdev write read max offset ...passed 00:11:12.018 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.018 Test: blockdev writev readv 8 blocks ...passed 00:11:12.018 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.018 Test: blockdev writev readv block ...passed 00:11:12.018 Test: blockdev writev readv size > 128k ...passed 00:11:12.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.018 Test: blockdev comparev and writev ...passed 00:11:12.018 Test: blockdev nvme passthru rw ...passed 00:11:12.018 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.018 Test: blockdev nvme admin passthru ...passed 00:11:12.018 Test: blockdev copy ...passed 00:11:12.018 Suite: bdevio tests on: Malloc2p3 00:11:12.019 Test: blockdev write read block ...passed 00:11:12.019 Test: blockdev write zeroes read block ...passed 00:11:12.019 Test: blockdev write zeroes read no split ...passed 00:11:12.019 Test: blockdev write zeroes read split ...passed 00:11:12.019 Test: blockdev write zeroes read split partial ...passed 00:11:12.019 Test: blockdev reset ...passed 00:11:12.019 Test: 
blockdev write read 8 blocks ...passed 00:11:12.019 Test: blockdev write read size > 128k ...passed 00:11:12.019 Test: blockdev write read invalid size ...passed 00:11:12.019 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.019 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.019 Test: blockdev write read max offset ...passed 00:11:12.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.019 Test: blockdev writev readv 8 blocks ...passed 00:11:12.019 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.019 Test: blockdev writev readv block ...passed 00:11:12.019 Test: blockdev writev readv size > 128k ...passed 00:11:12.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.019 Test: blockdev comparev and writev ...passed 00:11:12.277 Test: blockdev nvme passthru rw ...passed 00:11:12.277 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.278 Test: blockdev nvme admin passthru ...passed 00:11:12.278 Test: blockdev copy ...passed 00:11:12.278 Suite: bdevio tests on: Malloc2p2 00:11:12.278 Test: blockdev write read block ...passed 00:11:12.278 Test: blockdev write zeroes read block ...passed 00:11:12.278 Test: blockdev write zeroes read no split ...passed 00:11:12.278 Test: blockdev write zeroes read split ...passed 00:11:12.278 Test: blockdev write zeroes read split partial ...passed 00:11:12.278 Test: blockdev reset ...passed 00:11:12.278 Test: blockdev write read 8 blocks ...passed 00:11:12.278 Test: blockdev write read size > 128k ...passed 00:11:12.278 Test: blockdev write read invalid size ...passed 00:11:12.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.278 Test: blockdev write read max offset ...passed 00:11:12.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.278 Test: blockdev writev readv 8 blocks ...passed 00:11:12.278 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.278 Test: blockdev writev readv block ...passed 00:11:12.278 Test: blockdev writev readv size > 128k ...passed 00:11:12.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.278 Test: blockdev comparev and writev ...passed 00:11:12.278 Test: blockdev nvme passthru rw ...passed 00:11:12.278 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.278 Test: blockdev nvme admin passthru ...passed 00:11:12.278 Test: blockdev copy ...passed 00:11:12.278 Suite: bdevio tests on: Malloc2p1 00:11:12.278 Test: blockdev write read block ...passed 00:11:12.278 Test: blockdev write zeroes read block ...passed 00:11:12.278 Test: blockdev write zeroes read no split ...passed 00:11:12.278 Test: blockdev write zeroes read split ...passed 00:11:12.278 Test: blockdev write zeroes read split partial ...passed 00:11:12.278 Test: blockdev reset ...passed 00:11:12.278 Test: blockdev write read 8 blocks ...passed 00:11:12.278 Test: blockdev write read size > 128k ...passed 00:11:12.278 Test: blockdev write read invalid size ...passed 00:11:12.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.278 Test: blockdev write read max offset ...passed 00:11:12.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.278 Test: blockdev writev readv 8 blocks ...passed 00:11:12.278 
Test: blockdev writev readv 30 x 1block ...passed 00:11:12.278 Test: blockdev writev readv block ...passed 00:11:12.278 Test: blockdev writev readv size > 128k ...passed 00:11:12.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.278 Test: blockdev comparev and writev ...passed 00:11:12.278 Test: blockdev nvme passthru rw ...passed 00:11:12.278 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.278 Test: blockdev nvme admin passthru ...passed 00:11:12.278 Test: blockdev copy ...passed 00:11:12.278 Suite: bdevio tests on: Malloc2p0 00:11:12.278 Test: blockdev write read block ...passed 00:11:12.278 Test: blockdev write zeroes read block ...passed 00:11:12.278 Test: blockdev write zeroes read no split ...passed 00:11:12.278 Test: blockdev write zeroes read split ...passed 00:11:12.278 Test: blockdev write zeroes read split partial ...passed 00:11:12.278 Test: blockdev reset ...passed 00:11:12.278 Test: blockdev write read 8 blocks ...passed 00:11:12.278 Test: blockdev write read size > 128k ...passed 00:11:12.278 Test: blockdev write read invalid size ...passed 00:11:12.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.278 Test: blockdev write read max offset ...passed 00:11:12.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.278 Test: blockdev writev readv 8 blocks ...passed 00:11:12.278 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.278 Test: blockdev writev readv block ...passed 00:11:12.278 Test: blockdev writev readv size > 128k ...passed 00:11:12.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.278 Test: blockdev comparev and writev ...passed 00:11:12.278 Test: blockdev nvme passthru rw ...passed 00:11:12.278 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.278 Test: blockdev nvme admin passthru ...passed 00:11:12.278 Test: blockdev copy ...passed 00:11:12.278 Suite: bdevio tests on: Malloc1p1 00:11:12.278 Test: blockdev write read block ...passed 00:11:12.278 Test: blockdev write zeroes read block ...passed 00:11:12.278 Test: blockdev write zeroes read no split ...passed 00:11:12.278 Test: blockdev write zeroes read split ...passed 00:11:12.278 Test: blockdev write zeroes read split partial ...passed 00:11:12.278 Test: blockdev reset ...passed 00:11:12.278 Test: blockdev write read 8 blocks ...passed 00:11:12.278 Test: blockdev write read size > 128k ...passed 00:11:12.278 Test: blockdev write read invalid size ...passed 00:11:12.278 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.278 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.278 Test: blockdev write read max offset ...passed 00:11:12.278 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.278 Test: blockdev writev readv 8 blocks ...passed 00:11:12.278 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.278 Test: blockdev writev readv block ...passed 00:11:12.278 Test: blockdev writev readv size > 128k ...passed 00:11:12.278 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.278 Test: blockdev comparev and writev ...passed 00:11:12.278 Test: blockdev nvme passthru rw ...passed 00:11:12.278 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.278 Test: blockdev nvme admin passthru ...passed 00:11:12.278 Test: blockdev copy ...passed 00:11:12.278 Suite: 
bdevio tests on: Malloc1p0 00:11:12.278 Test: blockdev write read block ...passed 00:11:12.278 Test: blockdev write zeroes read block ...passed 00:11:12.278 Test: blockdev write zeroes read no split ...passed 00:11:12.537 Test: blockdev write zeroes read split ...passed 00:11:12.537 Test: blockdev write zeroes read split partial ...passed 00:11:12.537 Test: blockdev reset ...passed 00:11:12.537 Test: blockdev write read 8 blocks ...passed 00:11:12.537 Test: blockdev write read size > 128k ...passed 00:11:12.537 Test: blockdev write read invalid size ...passed 00:11:12.537 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.537 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.537 Test: blockdev write read max offset ...passed 00:11:12.537 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.537 Test: blockdev writev readv 8 blocks ...passed 00:11:12.537 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.537 Test: blockdev writev readv block ...passed 00:11:12.537 Test: blockdev writev readv size > 128k ...passed 00:11:12.538 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.538 Test: blockdev comparev and writev ...passed 00:11:12.538 Test: blockdev nvme passthru rw ...passed 00:11:12.538 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.538 Test: blockdev nvme admin passthru ...passed 00:11:12.538 Test: blockdev copy ...passed 00:11:12.538 Suite: bdevio tests on: Malloc0 00:11:12.538 Test: blockdev write read block ...passed 00:11:12.538 Test: blockdev write zeroes read block ...passed 00:11:12.538 Test: blockdev write zeroes read no split ...passed 00:11:12.538 Test: blockdev write zeroes read split ...passed 00:11:12.538 Test: blockdev write zeroes read split partial ...passed 00:11:12.538 Test: blockdev reset ...passed 00:11:12.538 Test: blockdev write read 8 blocks ...passed 00:11:12.538 Test: blockdev write read size > 128k ...passed 00:11:12.538 Test: blockdev write read invalid size ...passed 00:11:12.538 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:12.538 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:12.538 Test: blockdev write read max offset ...passed 00:11:12.538 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:12.538 Test: blockdev writev readv 8 blocks ...passed 00:11:12.538 Test: blockdev writev readv 30 x 1block ...passed 00:11:12.538 Test: blockdev writev readv block ...passed 00:11:12.538 Test: blockdev writev readv size > 128k ...passed 00:11:12.538 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:12.538 Test: blockdev comparev and writev ...passed 00:11:12.538 Test: blockdev nvme passthru rw ...passed 00:11:12.538 Test: blockdev nvme passthru vendor specific ...passed 00:11:12.538 Test: blockdev nvme admin passthru ...passed 00:11:12.538 Test: blockdev copy ...passed 00:11:12.538 00:11:12.538 Run Summary: Type Total Ran Passed Failed Inactive 00:11:12.538 suites 16 16 n/a 0 0 00:11:12.538 tests 368 368 368 0 0 00:11:12.538 asserts 2224 2224 2224 0 n/a 00:11:12.538 00:11:12.538 Elapsed time = 2.816 seconds 00:11:12.538 0 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 183202 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 183202 ']' 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 183202 
00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 183202 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 183202' 00:11:12.538 killing process with pid 183202 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 183202 00:11:12.538 14:03:58 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 183202 00:11:14.441 14:04:00 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:11:14.441 00:11:14.441 real 0m4.649s 00:11:14.441 user 0m11.728s 00:11:14.441 sys 0m0.627s 00:11:14.441 14:04:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.441 14:04:00 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:14.441 ************************************ 00:11:14.441 END TEST bdev_bounds 00:11:14.441 ************************************ 00:11:14.699 14:04:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:14.699 14:04:00 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:14.699 14:04:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:14.699 14:04:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.699 14:04:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:14.699 ************************************ 00:11:14.699 START TEST bdev_nbd 00:11:14.699 ************************************ 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:11:14.699 
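The bdev_nbd test starting here exports each of the 16 bdevs as a Linux /dev/nbdX device through a bdev_svc app listening on /var/tmp/spdk-nbd.sock, then validates each device with a single direct-I/O dd read, as the traces below show for nbd0, nbd1 and nbd2. Condensed, the per-bdev step looks like this (explicitly naming /dev/nbd0 and using a /tmp output file are illustrative choices; the script draws devices from its nbd_all list and writes its scratch file under test/bdev):

    $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    $ dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    $ scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0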
14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:14.699 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=183296 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 183296 /var/tmp/spdk-nbd.sock 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 183296 ']' 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:14.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.700 14:04:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:14.700 [2024-07-15 14:04:00.530004] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:11:14.700 [2024-07-15 14:04:00.530458] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.700 [2024-07-15 14:04:00.685240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.958 [2024-07-15 14:04:00.922064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.523 [2024-07-15 14:04:01.307452] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:15.523 [2024-07-15 14:04:01.307903] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:15.523 [2024-07-15 14:04:01.315423] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:15.523 [2024-07-15 14:04:01.315679] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:15.523 [2024-07-15 14:04:01.323458] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:15.523 [2024-07-15 14:04:01.323736] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:15.523 [2024-07-15 14:04:01.323972] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:15.523 [2024-07-15 14:04:01.511046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:15.523 [2024-07-15 14:04:01.511483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.523 [2024-07-15 14:04:01.511719] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:15.523 [2024-07-15 14:04:01.511985] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.523 [2024-07-15 14:04:01.513960] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.523 [2024-07-15 14:04:01.514188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # 
bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.088 14:04:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.346 1+0 records in 00:11:16.346 1+0 records out 00:11:16.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042056 s, 9.7 MB/s 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.346 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 
00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.606 1+0 records in 00:11:16.606 1+0 records out 00:11:16.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581001 s, 7.0 MB/s 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.606 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:16.864 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:16.865 1+0 records in 00:11:16.865 1+0 records out 00:11:16.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451629 s, 9.1 MB/s 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@884 -- # size=4096 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:16.865 14:04:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.123 1+0 records in 00:11:17.123 1+0 records out 00:11:17.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000510927 s, 8.0 MB/s 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.123 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 
-- # local i 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.382 1+0 records in 00:11:17.382 1+0 records out 00:11:17.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439161 s, 9.3 MB/s 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.382 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.641 1+0 records in 00:11:17.641 1+0 records out 00:11:17.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621152 s, 6.6 MB/s 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:17.641 14:04:03 
blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.641 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:17.899 1+0 records in 00:11:17.899 1+0 records out 00:11:17.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618906 s, 6.6 MB/s 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:17.899 14:04:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.464 1+0 records in 00:11:18.464 1+0 records out 00:11:18.464 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00064233 s, 6.4 MB/s 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.464 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.721 1+0 records in 00:11:18.721 1+0 records out 00:11:18.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363226 s, 11.3 MB/s 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.721 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:18.979 1+0 records in 00:11:18.979 1+0 records out 00:11:18.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703553 s, 5.8 MB/s 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:18.979 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:19.237 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:19.237 14:04:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:19.237 14:04:05 
blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.237 1+0 records in 00:11:19.237 1+0 records out 00:11:19.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627031 s, 6.5 MB/s 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.237 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.495 1+0 records in 00:11:19.495 1+0 records out 00:11:19.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000753528 s, 5.4 MB/s 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.495 14:04:05 
blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.495 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:19.754 1+0 records in 00:11:19.754 1+0 records out 00:11:19.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582381 s, 7.0 MB/s 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:19.754 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i 
<= 20 )) 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.012 1+0 records in 00:11:20.012 1+0 records out 00:11:20.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735882 s, 5.6 MB/s 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:20.012 14:04:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.269 1+0 records in 00:11:20.269 1+0 records out 00:11:20.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834954 s, 4.9 MB/s 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 
0 ']' 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:20.269 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.527 1+0 records in 00:11:20.527 1+0 records out 00:11:20.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000991387 s, 4.1 MB/s 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:20.527 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:20.784 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd0", 00:11:20.784 "bdev_name": "Malloc0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd1", 00:11:20.784 "bdev_name": "Malloc1p0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd2", 00:11:20.784 "bdev_name": "Malloc1p1" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd3", 00:11:20.784 "bdev_name": "Malloc2p0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd4", 00:11:20.784 "bdev_name": "Malloc2p1" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd5", 00:11:20.784 "bdev_name": "Malloc2p2" 00:11:20.784 }, 00:11:20.784 { 
00:11:20.784 "nbd_device": "/dev/nbd6", 00:11:20.784 "bdev_name": "Malloc2p3" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd7", 00:11:20.784 "bdev_name": "Malloc2p4" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd8", 00:11:20.784 "bdev_name": "Malloc2p5" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd9", 00:11:20.784 "bdev_name": "Malloc2p6" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd10", 00:11:20.784 "bdev_name": "Malloc2p7" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd11", 00:11:20.784 "bdev_name": "TestPT" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd12", 00:11:20.784 "bdev_name": "raid0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd13", 00:11:20.784 "bdev_name": "concat0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd14", 00:11:20.784 "bdev_name": "raid1" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd15", 00:11:20.784 "bdev_name": "AIO0" 00:11:20.784 } 00:11:20.784 ]' 00:11:20.784 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:20.784 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd0", 00:11:20.784 "bdev_name": "Malloc0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd1", 00:11:20.784 "bdev_name": "Malloc1p0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd2", 00:11:20.784 "bdev_name": "Malloc1p1" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd3", 00:11:20.784 "bdev_name": "Malloc2p0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd4", 00:11:20.784 "bdev_name": "Malloc2p1" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd5", 00:11:20.784 "bdev_name": "Malloc2p2" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd6", 00:11:20.784 "bdev_name": "Malloc2p3" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd7", 00:11:20.784 "bdev_name": "Malloc2p4" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd8", 00:11:20.784 "bdev_name": "Malloc2p5" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd9", 00:11:20.784 "bdev_name": "Malloc2p6" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd10", 00:11:20.784 "bdev_name": "Malloc2p7" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd11", 00:11:20.784 "bdev_name": "TestPT" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd12", 00:11:20.784 "bdev_name": "raid0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd13", 00:11:20.784 "bdev_name": "concat0" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd14", 00:11:20.784 "bdev_name": "raid1" 00:11:20.784 }, 00:11:20.784 { 00:11:20.784 "nbd_device": "/dev/nbd15", 00:11:20.784 "bdev_name": "AIO0" 00:11:20.784 } 00:11:20.784 ]' 00:11:20.784 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:21.041 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:21.041 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:11:21.041 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:21.041 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.041 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:21.041 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.041 14:04:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.298 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.727 14:04:07 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.727 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.983 14:04:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.239 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.496 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:22.754 
14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:22.754 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.011 14:04:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.269 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd9 /proc/partitions 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.526 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:23.784 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.041 14:04:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.298 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:24.558 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:24.817 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:25.075 14:04:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.333 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r 
'.[] | .nbd_device' 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:11:25.591 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 
0 )) 00:11:25.592 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.592 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:25.850 /dev/nbd0 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:25.850 1+0 records in 00:11:25.850 1+0 records out 00:11:25.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489142 s, 8.4 MB/s 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:25.850 14:04:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:26.108 /dev/nbd1 00:11:26.108 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:26.108 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:26.108 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:26.108 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:26.108 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:26.108 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 
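The nbd_start_disk/waitfornbd records traced above show the readiness check each exported device goes through before the test continues. A rough reconstruction of that helper, pieced together from the echoed commands in common/autotest_common.sh (the retry sleeps, the failure return and the temp-file variable are assumptions -- only the success path appears in this run):

waitfornbd() {
    local nbd_name=$1
    local i
    local tmpfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest

    # wait for the kernel to publish the device in /proc/partitions
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            break
        fi
        sleep 0.1 # assumed retry delay; not visible in this trace
    done

    # confirm the device actually answers I/O: one 4 KiB O_DIRECT read
    # must land a non-empty file (trace lines @883-@887)
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmpfile" bs=4096 count=1 iflag=direct; then
            size=$(stat -c %s "$tmpfile")
            rm -f "$tmpfile"
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1 # assumed
    done
    return 1 # assumed failure path, never reached in this run
}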
-- # (( i <= 20 )) 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.109 1+0 records in 00:11:26.109 1+0 records out 00:11:26.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345289 s, 11.9 MB/s 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.109 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:26.367 /dev/nbd10 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.367 1+0 records in 00:11:26.367 1+0 records out 00:11:26.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354036 s, 11.6 MB/s 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.367 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 
/dev/nbd11 00:11:26.626 /dev/nbd11 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.626 1+0 records in 00:11:26.626 1+0 records out 00:11:26.626 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542299 s, 7.6 MB/s 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.626 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:26.884 /dev/nbd12 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.884 1+0 records in 00:11:26.884 1+0 records 
out 00:11:26.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700379 s, 5.8 MB/s 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:26.884 14:04:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:27.143 /dev/nbd13 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.143 1+0 records in 00:11:27.143 1+0 records out 00:11:27.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698694 s, 5.9 MB/s 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.143 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:27.421 /dev/nbd14 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:27.421 14:04:13 
blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.421 1+0 records in 00:11:27.421 1+0 records out 00:11:27.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558895 s, 7.3 MB/s 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.421 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:27.678 /dev/nbd15 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.678 1+0 records in 00:11:27.678 1+0 records out 00:11:27.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061546 s, 6.7 MB/s 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.678 14:04:13 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.678 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:27.936 /dev/nbd2 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.936 1+0 records in 00:11:27.936 1+0 records out 00:11:27.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763939 s, 5.4 MB/s 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:27.936 14:04:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:28.194 /dev/nbd3 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( 
i = 1 )) 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.194 1+0 records in 00:11:28.194 1+0 records out 00:11:28.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524025 s, 7.8 MB/s 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.194 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:28.452 /dev/nbd4 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.452 1+0 records in 00:11:28.452 1+0 records out 00:11:28.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000607898 s, 6.7 MB/s 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:28.452 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.710 14:04:14 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.710 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:28.710 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.710 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.710 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:28.710 /dev/nbd5 00:11:28.710 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.969 1+0 records in 00:11:28.969 1+0 records out 00:11:28.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693276 s, 5.9 MB/s 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:28.969 14:04:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:29.227 /dev/nbd6 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:29.227 14:04:15 
blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.227 1+0 records in 00:11:29.227 1+0 records out 00:11:29.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543911 s, 7.5 MB/s 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.227 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:29.485 /dev/nbd7 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.485 1+0 records in 00:11:29.485 1+0 records out 00:11:29.485 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648379 s, 6.3 MB/s 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.485 14:04:15 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.485 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:29.742 /dev/nbd8 00:11:29.742 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:29.742 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:29.742 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:29.742 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:29.742 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.743 1+0 records in 00:11:29.743 1+0 records out 00:11:29.743 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000817492 s, 5.0 MB/s 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:29.743 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:30.000 /dev/nbd9 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:30.000 
14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.000 1+0 records in 00:11:30.000 1+0 records out 00:11:30.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000867039 s, 4.7 MB/s 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:30.000 14:04:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd0", 00:11:30.566 "bdev_name": "Malloc0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd1", 00:11:30.566 "bdev_name": "Malloc1p0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd10", 00:11:30.566 "bdev_name": "Malloc1p1" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd11", 00:11:30.566 "bdev_name": "Malloc2p0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd12", 00:11:30.566 "bdev_name": "Malloc2p1" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd13", 00:11:30.566 "bdev_name": "Malloc2p2" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd14", 00:11:30.566 "bdev_name": "Malloc2p3" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd15", 00:11:30.566 "bdev_name": "Malloc2p4" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd2", 00:11:30.566 "bdev_name": "Malloc2p5" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd3", 00:11:30.566 "bdev_name": "Malloc2p6" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd4", 00:11:30.566 "bdev_name": "Malloc2p7" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd5", 00:11:30.566 "bdev_name": "TestPT" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd6", 00:11:30.566 "bdev_name": "raid0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd7", 00:11:30.566 "bdev_name": "concat0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd8", 00:11:30.566 "bdev_name": "raid1" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd9", 00:11:30.566 "bdev_name": "AIO0" 00:11:30.566 } 00:11:30.566 ]' 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd0", 00:11:30.566 "bdev_name": "Malloc0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd1", 00:11:30.566 
"bdev_name": "Malloc1p0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd10", 00:11:30.566 "bdev_name": "Malloc1p1" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd11", 00:11:30.566 "bdev_name": "Malloc2p0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd12", 00:11:30.566 "bdev_name": "Malloc2p1" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd13", 00:11:30.566 "bdev_name": "Malloc2p2" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd14", 00:11:30.566 "bdev_name": "Malloc2p3" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd15", 00:11:30.566 "bdev_name": "Malloc2p4" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd2", 00:11:30.566 "bdev_name": "Malloc2p5" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd3", 00:11:30.566 "bdev_name": "Malloc2p6" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd4", 00:11:30.566 "bdev_name": "Malloc2p7" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd5", 00:11:30.566 "bdev_name": "TestPT" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd6", 00:11:30.566 "bdev_name": "raid0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd7", 00:11:30.566 "bdev_name": "concat0" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd8", 00:11:30.566 "bdev_name": "raid1" 00:11:30.566 }, 00:11:30.566 { 00:11:30.566 "nbd_device": "/dev/nbd9", 00:11:30.566 "bdev_name": "AIO0" 00:11:30.566 } 00:11:30.566 ]' 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:30.566 /dev/nbd1 00:11:30.566 /dev/nbd10 00:11:30.566 /dev/nbd11 00:11:30.566 /dev/nbd12 00:11:30.566 /dev/nbd13 00:11:30.566 /dev/nbd14 00:11:30.566 /dev/nbd15 00:11:30.566 /dev/nbd2 00:11:30.566 /dev/nbd3 00:11:30.566 /dev/nbd4 00:11:30.566 /dev/nbd5 00:11:30.566 /dev/nbd6 00:11:30.566 /dev/nbd7 00:11:30.566 /dev/nbd8 00:11:30.566 /dev/nbd9' 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:30.566 /dev/nbd1 00:11:30.566 /dev/nbd10 00:11:30.566 /dev/nbd11 00:11:30.566 /dev/nbd12 00:11:30.566 /dev/nbd13 00:11:30.566 /dev/nbd14 00:11:30.566 /dev/nbd15 00:11:30.566 /dev/nbd2 00:11:30.566 /dev/nbd3 00:11:30.566 /dev/nbd4 00:11:30.566 /dev/nbd5 00:11:30.566 /dev/nbd6 00:11:30.566 /dev/nbd7 00:11:30.566 /dev/nbd8 00:11:30.566 /dev/nbd9' 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' 
'/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:30.566 256+0 records in 00:11:30.566 256+0 records out 00:11:30.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00925961 s, 113 MB/s 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:30.566 256+0 records in 00:11:30.566 256+0 records out 00:11:30.566 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130781 s, 8.0 MB/s 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.566 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:30.824 256+0 records in 00:11:30.824 256+0 records out 00:11:30.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124689 s, 8.4 MB/s 00:11:30.824 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.824 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:30.824 256+0 records in 00:11:30.824 256+0 records out 00:11:30.824 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125781 s, 8.3 MB/s 00:11:30.824 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:30.824 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:31.081 256+0 records in 00:11:31.081 256+0 records out 00:11:31.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125352 s, 8.4 MB/s 00:11:31.081 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.081 14:04:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:31.081 256+0 records in 00:11:31.081 256+0 records out 00:11:31.081 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125243 s, 8.4 MB/s 00:11:31.081 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.081 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:31.338 256+0 records in 00:11:31.338 256+0 records out 00:11:31.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126807 s, 8.3 MB/s 00:11:31.338 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.338 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 
count=256 oflag=direct 00:11:31.338 256+0 records in 00:11:31.338 256+0 records out 00:11:31.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124823 s, 8.4 MB/s 00:11:31.338 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.338 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:31.595 256+0 records in 00:11:31.595 256+0 records out 00:11:31.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125454 s, 8.4 MB/s 00:11:31.595 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.595 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:31.595 256+0 records in 00:11:31.595 256+0 records out 00:11:31.595 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12534 s, 8.4 MB/s 00:11:31.595 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.595 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:31.852 256+0 records in 00:11:31.852 256+0 records out 00:11:31.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124119 s, 8.4 MB/s 00:11:31.853 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.853 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:31.853 256+0 records in 00:11:31.853 256+0 records out 00:11:31.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129754 s, 8.1 MB/s 00:11:31.853 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:31.853 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:32.109 256+0 records in 00:11:32.109 256+0 records out 00:11:32.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123717 s, 8.5 MB/s 00:11:32.109 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.109 14:04:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:32.109 256+0 records in 00:11:32.109 256+0 records out 00:11:32.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125169 s, 8.4 MB/s 00:11:32.109 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.109 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:32.366 256+0 records in 00:11:32.366 256+0 records out 00:11:32.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127322 s, 8.2 MB/s 00:11:32.366 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.366 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:32.366 256+0 records in 00:11:32.366 256+0 records out 00:11:32.366 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130141 s, 8.1 MB/s 00:11:32.366 14:04:18 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:32.366 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:32.624 256+0 records in 00:11:32.624 256+0 records out 00:11:32.624 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188724 s, 5.6 MB/s 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:32.624 14:04:18 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.624 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.881 14:04:18 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- 
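The dd and cmp records above correspond to the write and verify passes of the nbd_common.sh data check. Roughly, assuming the device list and an operation keyword are the two arguments (as the traced invocations suggest), with the temp-file path copied from the trace:

nbd_dd_data_verify() {
    local nbd_list=($1) # space-separated /dev/nbdX list, as quoted in the trace
    local operation=$2  # 'write' or 'verify'
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest

    if [ "$operation" = "write" ]; then
        # generate 1 MiB of random data, then push it through every device with O_DIRECT
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = "verify" ]; then
        # read the first 1 MiB back from each device and compare byte-for-byte
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"
        done
        rm "$tmp_file"
    fi
}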
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.138 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.396 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.653 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.219 
14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.219 14:04:19 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.477 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:34.734 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.735 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.992 14:04:20 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd15 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.249 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:35.507 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.765 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:36.023 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:36.023 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:36.023 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:36.023 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.023 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.023 14:04:21 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:36.023 14:04:21 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.023 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.023 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.023 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.280 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.537 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.796 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:37.053 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:37.053 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:37.053 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:37.053 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.054 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.054 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:37.054 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.054 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.054 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:37.054 14:04:22 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:37.403 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:37.404 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:37.404 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.404 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 
/dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:37.662 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:37.919 malloc_lvol_verify 00:11:37.919 14:04:23 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:38.178 88875729-3476-404d-8eea-f13c76a7b44c 00:11:38.178 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:38.435 e1429166-4297-4724-b6d7-3d70069c5bc3 00:11:38.435 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:38.693 /dev/nbd0 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:38.693 mke2fs 1.46.5 (30-Dec-2021) 00:11:38.693 Discarding device blocks: 0/4096 done 00:11:38.693 Creating filesystem with 4096 1k blocks and 1024 inodes 00:11:38.693 00:11:38.693 Allocating group tables: 0/1 done 00:11:38.693 Writing inode tables: 0/1 done 00:11:38.693 Creating journal (1024 blocks): done 00:11:38.693 Writing superblocks and filesystem accounting information: 0/1 done 00:11:38.693 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:38.693 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 183296 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 183296 ']' 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 183296 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 183296 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 183296' 00:11:38.979 killing process with pid 183296 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@967 -- # kill 183296 00:11:38.979 14:04:24 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # wait 183296 00:11:41.504 14:04:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:11:41.504 00:11:41.504 real 0m26.695s 00:11:41.504 user 0m35.368s 00:11:41.504 sys 0m11.247s 00:11:41.504 14:04:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.504 14:04:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:41.504 ************************************ 00:11:41.504 END TEST bdev_nbd 00:11:41.504 ************************************ 00:11:41.504 14:04:27 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:11:41.504 14:04:27 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:11:41.504 14:04:27 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:11:41.504 14:04:27 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:11:41.504 14:04:27 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:11:41.504 14:04:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.504 14:04:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.504 14:04:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:41.504 ************************************ 00:11:41.504 START TEST bdev_fio 00:11:41.504 ************************************ 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:41.504 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:41.504 14:04:27 
blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in 
"${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:11:41.504 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 
00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.505 14:04:27 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:41.505 ************************************ 00:11:41.505 START TEST bdev_fio_rw_verify 00:11:41.505 ************************************ 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.6 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.6 ]] 00:11:41.505 14:04:27 
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:41.505 14:04:27 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:41.505 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:41.505 fio-3.35 00:11:41.505 Starting 16 threads 00:11:53.728 00:11:53.728 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=184247: Mon Jul 15 14:04:39 2024 00:11:53.728 read: IOPS=111k, BW=435MiB/s (456MB/s)(4348MiB/10001msec) 00:11:53.728 slat (nsec): min=923, max=57039k, avg=16933.18, stdev=258319.31 00:11:53.728 clat (usec): min=3, max=57163, avg=160.88, stdev=847.66 00:11:53.728 lat (usec): min=11, max=57181, avg=177.81, stdev=886.61 00:11:53.728 clat percentiles (usec): 00:11:53.728 | 50.000th=[ 86], 99.000th=[ 709], 99.900th=[13435], 99.990th=[23200], 00:11:53.728 | 99.999th=[51119] 00:11:53.728 write: IOPS=176k, BW=688MiB/s (721MB/s)(6789MiB/9874msec); 0 zone resets 00:11:53.728 slat (usec): min=2, max=60418, avg=49.93, stdev=515.69 00:11:53.728 clat (usec): min=4, 
max=60704, avg=257.15, stdev=1137.98 00:11:53.728 lat (usec): min=22, max=60723, avg=307.08, stdev=1250.81 00:11:53.728 clat percentiles (usec): 00:11:53.728 | 50.000th=[ 139], 99.000th=[ 1467], 99.900th=[16909], 99.990th=[27657], 00:11:53.728 | 99.999th=[43779] 00:11:53.728 bw ( KiB/s): min=394385, max=1119760, per=98.62%, avg=694356.58, stdev=13663.96, samples=304 00:11:53.728 iops : min=98596, max=279940, avg=173589.11, stdev=3415.99, samples=304 00:11:53.728 lat (usec) : 4=0.01%, 10=0.01%, 20=1.04%, 50=12.74%, 100=29.54% 00:11:53.728 lat (usec) : 250=45.81%, 500=7.16%, 750=2.28%, 1000=0.37% 00:11:53.728 lat (msec) : 2=0.35%, 4=0.11%, 10=0.17%, 20=0.37%, 50=0.04% 00:11:53.728 lat (msec) : 100=0.01% 00:11:53.728 cpu : usr=54.76%, sys=2.15%, ctx=260853, majf=1, minf=120387 00:11:53.728 IO depths : 1=11.6%, 2=23.9%, 4=51.3%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:53.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.728 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:53.728 issued rwts: total=1113103,1738003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:53.728 latency : target=0, window=0, percentile=100.00%, depth=8 00:11:53.728 00:11:53.728 Run status group 0 (all jobs): 00:11:53.728 READ: bw=435MiB/s (456MB/s), 435MiB/s-435MiB/s (456MB/s-456MB/s), io=4348MiB (4559MB), run=10001-10001msec 00:11:53.728 WRITE: bw=688MiB/s (721MB/s), 688MiB/s-688MiB/s (721MB/s-721MB/s), io=6789MiB (7119MB), run=9874-9874msec 00:11:55.627 ----------------------------------------------------- 00:11:55.627 Suppressions used: 00:11:55.627 count bytes template 00:11:55.627 16 140 /usr/src/fio/parse.c 00:11:55.627 11598 1113408 /usr/src/fio/iolog.c 00:11:55.627 1 8 libtcmalloc_minimal.so 00:11:55.627 1 904 libcrypto.so 00:11:55.627 ----------------------------------------------------- 00:11:55.627 00:11:55.627 00:11:55.627 real 0m14.040s 00:11:55.627 user 1m32.849s 00:11:55.627 sys 0m4.168s 00:11:55.627 14:04:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.627 14:04:41 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:11:55.627 ************************************ 00:11:55.627 END TEST bdev_fio_rw_verify 00:11:55.627 ************************************ 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:55.627 14:04:41 
blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:11:55.627 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:55.628 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d02aa269-ae2f-4cbb-ac80-2afa3ffaa0e2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d02aa269-ae2f-4cbb-ac80-2afa3ffaa0e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3ab29fa7-2995-5599-a4eb-acc58a058ca0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ab29fa7-2995-5599-a4eb-acc58a058ca0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c2461ec8-13b5-57b5-beca-f48d19444804"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c2461ec8-13b5-57b5-beca-f48d19444804",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' 
"nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c8c09a0c-bc3f-5a99-adfe-cc25537c2a4c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8c09a0c-bc3f-5a99-adfe-cc25537c2a4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "06c46d9c-9000-5e14-9782-7a0f7454357b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06c46d9c-9000-5e14-9782-7a0f7454357b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "de5d9cfe-7834-5b8b-ba7c-3c9e05d606f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "de5d9cfe-7834-5b8b-ba7c-3c9e05d606f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "9820adfd-bd9a-5946-8588-cd4d1808911c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": 
"9820adfd-bd9a-5946-8588-cd4d1808911c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cfca1155-dd52-564a-85bc-6b698daece95"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfca1155-dd52-564a-85bc-6b698daece95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "20b8f8f8-8b1e-514e-842f-5f1ad1e62135"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20b8f8f8-8b1e-514e-842f-5f1ad1e62135",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1357ee3a-ebab-5b7c-bd88-9005fb61cde8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1357ee3a-ebab-5b7c-bd88-9005fb61cde8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "5cfb737f-2cc0-5b14-999e-8e285b635145"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5cfb737f-2cc0-5b14-999e-8e285b635145",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9699ae8d-77e7-54f6-8ecf-1b37edbd3ccd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9699ae8d-77e7-54f6-8ecf-1b37edbd3ccd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6a8469d3-f60c-47d8-8740-963cf5e2e931"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6a8469d3-f60c-47d8-8740-963cf5e2e931",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": 
"6a8469d3-f60c-47d8-8740-963cf5e2e931",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "d44b941d-5dd5-4fa0-89df-401bc33626b0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "add801b3-df31-45ca-ad40-c9efbe971f5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "046493ce-43fc-4be4-b04b-b47267d7e6b1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "046493ce-43fc-4be4-b04b-b47267d7e6b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "046493ce-43fc-4be4-b04b-b47267d7e6b1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3952e20d-61c6-48fd-8c7e-effa1899f995",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e119af84-3c5e-4662-b223-1b1bd2229dfa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "1d89eff0-9057-4dc7-af62-a370bbe0a535"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1d89eff0-9057-4dc7-af62-a370bbe0a535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1d89eff0-9057-4dc7-af62-a370bbe0a535",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "94b20597-34e3-4fb8-877e-4f98b85b025b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7955d88b-042c-48ac-ad17-2829f4e9f3dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "64efa5b5-6251-406b-8d94-3d32bdfd81a7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "64efa5b5-6251-406b-8d94-3d32bdfd81a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:55.628 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:11:55.628 Malloc1p0 00:11:55.628 Malloc1p1 00:11:55.628 Malloc2p0 00:11:55.628 Malloc2p1 00:11:55.628 Malloc2p2 00:11:55.628 Malloc2p3 00:11:55.628 Malloc2p4 00:11:55.628 Malloc2p5 00:11:55.628 Malloc2p6 00:11:55.628 Malloc2p7 00:11:55.628 TestPT 00:11:55.628 raid0 00:11:55.628 concat0 ]] 00:11:55.628 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d02aa269-ae2f-4cbb-ac80-2afa3ffaa0e2"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d02aa269-ae2f-4cbb-ac80-2afa3ffaa0e2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "3ab29fa7-2995-5599-a4eb-acc58a058ca0"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "3ab29fa7-2995-5599-a4eb-acc58a058ca0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "c2461ec8-13b5-57b5-beca-f48d19444804"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "c2461ec8-13b5-57b5-beca-f48d19444804",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "c8c09a0c-bc3f-5a99-adfe-cc25537c2a4c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "c8c09a0c-bc3f-5a99-adfe-cc25537c2a4c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "06c46d9c-9000-5e14-9782-7a0f7454357b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "06c46d9c-9000-5e14-9782-7a0f7454357b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "de5d9cfe-7834-5b8b-ba7c-3c9e05d606f7"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "de5d9cfe-7834-5b8b-ba7c-3c9e05d606f7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "9820adfd-bd9a-5946-8588-cd4d1808911c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "9820adfd-bd9a-5946-8588-cd4d1808911c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "cfca1155-dd52-564a-85bc-6b698daece95"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "cfca1155-dd52-564a-85bc-6b698daece95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "20b8f8f8-8b1e-514e-842f-5f1ad1e62135"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "20b8f8f8-8b1e-514e-842f-5f1ad1e62135",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "1357ee3a-ebab-5b7c-bd88-9005fb61cde8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1357ee3a-ebab-5b7c-bd88-9005fb61cde8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "5cfb737f-2cc0-5b14-999e-8e285b635145"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5cfb737f-2cc0-5b14-999e-8e285b635145",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9699ae8d-77e7-54f6-8ecf-1b37edbd3ccd"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9699ae8d-77e7-54f6-8ecf-1b37edbd3ccd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' 
}' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "6a8469d3-f60c-47d8-8740-963cf5e2e931"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "6a8469d3-f60c-47d8-8740-963cf5e2e931",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "6a8469d3-f60c-47d8-8740-963cf5e2e931",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "d44b941d-5dd5-4fa0-89df-401bc33626b0",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "add801b3-df31-45ca-ad40-c9efbe971f5d",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "046493ce-43fc-4be4-b04b-b47267d7e6b1"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "046493ce-43fc-4be4-b04b-b47267d7e6b1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "046493ce-43fc-4be4-b04b-b47267d7e6b1",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "3952e20d-61c6-48fd-8c7e-effa1899f995",' ' "is_configured": true,' ' "data_offset": 0,' 
' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "e119af84-3c5e-4662-b223-1b1bd2229dfa",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "1d89eff0-9057-4dc7-af62-a370bbe0a535"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "1d89eff0-9057-4dc7-af62-a370bbe0a535",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "1d89eff0-9057-4dc7-af62-a370bbe0a535",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "94b20597-34e3-4fb8-877e-4f98b85b025b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "7955d88b-042c-48ac-ad17-2829f4e9f3dc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "64efa5b5-6251-406b-8d94-3d32bdfd81a7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "64efa5b5-6251-406b-8d94-3d32bdfd81a7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' 
"${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # 
echo '[job_Malloc2p7]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.630 14:04:41 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:11:55.630 ************************************ 00:11:55.630 START TEST bdev_fio_trim 00:11:55.630 ************************************ 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:11:55.630 14:04:41 
blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.6 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.6 ]] 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:55.630 14:04:41 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:55.889 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:55.889 fio-3.35 00:11:55.889 Starting 14 threads 00:12:08.149 00:12:08.149 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=184462: Mon Jul 
15 14:04:53 2024 00:12:08.149 write: IOPS=278k, BW=1088MiB/s (1141MB/s)(10.6GiB/10014msec); 0 zone resets 00:12:08.149 slat (nsec): min=1107, max=35334k, avg=16388.84, stdev=241082.85 00:12:08.149 clat (usec): min=9, max=30253, avg=142.26, stdev=788.77 00:12:08.149 lat (usec): min=12, max=35463, avg=158.65, stdev=824.54 00:12:08.149 clat percentiles (usec): 00:12:08.149 | 50.000th=[ 81], 99.000th=[ 693], 99.900th=[13566], 99.990th=[17433], 00:12:08.149 | 99.999th=[25297] 00:12:08.149 bw ( MiB/s): min= 605, max= 1698, per=100.00%, avg=1100.58, stdev=25.05, samples=267 00:12:08.149 iops : min=154880, max=434782, avg=281748.89, stdev=6412.20, samples=267 00:12:08.149 trim: IOPS=278k, BW=1088MiB/s (1141MB/s)(10.6GiB/10014msec); 0 zone resets 00:12:08.149 slat (usec): min=2, max=30092, avg=11.82, stdev=201.54 00:12:08.149 clat (nsec): min=1895, max=35464k, avg=127650.51, stdev=665597.19 00:12:08.149 lat (usec): min=5, max=35469, avg=139.47, stdev=695.57 00:12:08.149 clat percentiles (usec): 00:12:08.149 | 50.000th=[ 89], 99.000th=[ 215], 99.900th=[13173], 99.990th=[15139], 00:12:08.149 | 99.999th=[25297] 00:12:08.149 bw ( MiB/s): min= 605, max= 1698, per=100.00%, avg=1100.59, stdev=25.05, samples=267 00:12:08.149 iops : min=154880, max=434776, avg=281749.80, stdev=6412.16, samples=267 00:12:08.149 lat (usec) : 2=0.01%, 4=0.01%, 10=0.24%, 20=0.45%, 50=11.75% 00:12:08.149 lat (usec) : 100=54.67%, 250=31.13%, 500=0.67%, 750=0.65%, 1000=0.09% 00:12:08.149 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.30%, 50=0.01% 00:12:08.149 cpu : usr=68.75%, sys=0.24%, ctx=166407, majf=0, minf=831 00:12:08.149 IO depths : 1=12.3%, 2=24.6%, 4=50.1%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:08.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.149 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:08.149 issued rwts: total=0,2788483,2788489,0 short=0,0,0,0 dropped=0,0,0,0 00:12:08.149 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:08.149 00:12:08.149 Run status group 0 (all jobs): 00:12:08.149 WRITE: bw=1088MiB/s (1141MB/s), 1088MiB/s-1088MiB/s (1141MB/s-1141MB/s), io=10.6GiB (11.4GB), run=10014-10014msec 00:12:08.149 TRIM: bw=1088MiB/s (1141MB/s), 1088MiB/s-1088MiB/s (1141MB/s-1141MB/s), io=10.6GiB (11.4GB), run=10014-10014msec 00:12:09.526 ----------------------------------------------------- 00:12:09.526 Suppressions used: 00:12:09.526 count bytes template 00:12:09.526 14 129 /usr/src/fio/parse.c 00:12:09.526 1 8 libtcmalloc_minimal.so 00:12:09.526 1 904 libcrypto.so 00:12:09.526 ----------------------------------------------------- 00:12:09.526 00:12:09.526 00:12:09.526 real 0m13.866s 00:12:09.526 user 1m40.908s 00:12:09.526 sys 0m0.922s 00:12:09.526 14:04:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.526 14:04:55 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:12:09.526 ************************************ 00:12:09.526 END TEST bdev_fio_trim 00:12:09.526 ************************************ 00:12:09.526 14:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:12:09.526 14:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:12:09.526 14:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:09.526 14:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:12:09.526 /home/vagrant/spdk_repo/spdk 00:12:09.526 
14:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:12:09.526 00:12:09.526 real 0m28.216s 00:12:09.526 user 3m13.927s 00:12:09.526 sys 0m5.200s 00:12:09.526 14:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.526 14:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:09.526 ************************************ 00:12:09.526 END TEST bdev_fio 00:12:09.526 ************************************ 00:12:09.527 14:04:55 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:09.527 14:04:55 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:09.527 14:04:55 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:09.527 14:04:55 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:09.527 14:04:55 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.527 14:04:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:09.527 ************************************ 00:12:09.527 START TEST bdev_verify 00:12:09.527 ************************************ 00:12:09.527 14:04:55 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:09.786 [2024-07-15 14:04:55.549700] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:09.786 [2024-07-15 14:04:55.550072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184647 ] 00:12:09.786 [2024-07-15 14:04:55.710183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:10.044 [2024-07-15 14:04:55.954830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.044 [2024-07-15 14:04:55.954833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:10.612 [2024-07-15 14:04:56.330555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:10.612 [2024-07-15 14:04:56.330999] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:10.612 [2024-07-15 14:04:56.338530] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:10.612 [2024-07-15 14:04:56.338810] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:10.612 [2024-07-15 14:04:56.346585] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:10.612 [2024-07-15 14:04:56.346866] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:10.612 [2024-07-15 14:04:56.347096] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:10.612 [2024-07-15 14:04:56.530186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:10.612 [2024-07-15 14:04:56.530603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.612 [2024-07-15 14:04:56.530862] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:10.612 [2024-07-15 14:04:56.531092] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.612 [2024-07-15 14:04:56.533153] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.612 [2024-07-15 14:04:56.533390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:11.179 Running I/O for 5 seconds... 00:12:16.448 00:12:16.448 Latency(us) 00:12:16.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.448 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x1000 00:12:16.448 Malloc0 : 5.08 2346.56 9.17 0.00 0.00 54487.83 336.99 127735.62 00:12:16.448 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x1000 length 0x1000 00:12:16.448 Malloc0 : 5.08 2262.41 8.84 0.00 0.00 56515.10 47.71 280255.77 00:12:16.448 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x800 00:12:16.448 Malloc1p0 : 5.12 1200.70 4.69 0.00 0.00 106330.00 1325.61 124875.87 00:12:16.448 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x800 length 0x800 00:12:16.448 Malloc1p0 : 5.12 1249.19 4.88 0.00 0.00 102212.77 1333.06 87699.08 00:12:16.448 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x800 00:12:16.448 Malloc1p1 : 5.12 1200.40 4.69 0.00 0.00 106222.82 1273.48 118203.11 00:12:16.448 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x800 length 0x800 00:12:16.448 Malloc1p1 : 5.13 1248.69 4.88 0.00 0.00 102124.39 1258.59 86269.21 00:12:16.448 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x200 00:12:16.448 Malloc2p0 : 5.12 1200.12 4.69 0.00 0.00 106113.61 1213.91 111053.73 00:12:16.448 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x200 length 0x200 00:12:16.448 Malloc2p0 : 5.13 1248.21 4.88 0.00 0.00 102041.10 1221.35 86269.21 00:12:16.448 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x200 00:12:16.448 Malloc2p1 : 5.12 1199.86 4.69 0.00 0.00 106013.21 1184.12 104857.60 00:12:16.448 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x200 length 0x200 00:12:16.448 Malloc2p1 : 5.13 1247.81 4.87 0.00 0.00 101956.44 1191.56 86269.21 00:12:16.448 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x200 00:12:16.448 Malloc2p2 : 5.12 1199.64 4.69 0.00 0.00 105912.41 1146.88 100091.35 00:12:16.448 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x200 length 0x200 00:12:16.448 Malloc2p2 : 5.13 1247.47 4.87 0.00 0.00 101859.08 1154.33 86269.21 00:12:16.448 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x200 00:12:16.448 Malloc2p3 : 5.12 1199.21 4.68 0.00 
0.00 105818.17 1109.64 95325.09 00:12:16.448 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x200 length 0x200 00:12:16.448 Malloc2p3 : 5.13 1247.13 4.87 0.00 0.00 101776.62 1102.20 85315.96 00:12:16.448 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x200 00:12:16.448 Malloc2p4 : 5.13 1198.74 4.68 0.00 0.00 105743.55 1064.96 93418.59 00:12:16.448 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x200 length 0x200 00:12:16.448 Malloc2p4 : 5.13 1246.78 4.87 0.00 0.00 101691.24 1087.30 84839.33 00:12:16.448 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x200 00:12:16.448 Malloc2p5 : 5.13 1198.27 4.68 0.00 0.00 105673.51 1057.51 93895.21 00:12:16.448 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x200 length 0x200 00:12:16.448 Malloc2p5 : 5.13 1246.45 4.87 0.00 0.00 101612.51 1050.07 85315.96 00:12:16.448 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.448 Verification LBA range: start 0x0 length 0x200 00:12:16.449 Malloc2p6 : 5.13 1197.87 4.68 0.00 0.00 105596.98 1035.17 95325.09 00:12:16.449 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x200 length 0x200 00:12:16.449 Malloc2p6 : 5.14 1246.13 4.87 0.00 0.00 101528.62 1027.72 85315.96 00:12:16.449 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x0 length 0x200 00:12:16.449 Malloc2p7 : 5.13 1197.48 4.68 0.00 0.00 105517.68 1012.83 101521.22 00:12:16.449 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x200 length 0x200 00:12:16.449 Malloc2p7 : 5.14 1245.78 4.87 0.00 0.00 101445.51 1012.83 85792.58 00:12:16.449 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x0 length 0x1000 00:12:16.449 TestPT : 5.13 1197.07 4.68 0.00 0.00 105406.51 811.75 107240.73 00:12:16.449 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x1000 length 0x1000 00:12:16.449 TestPT : 5.14 1244.74 4.86 0.00 0.00 101410.42 3083.17 85315.96 00:12:16.449 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x0 length 0x2000 00:12:16.449 raid0 : 5.13 1196.70 4.67 0.00 0.00 105263.45 938.36 113436.86 00:12:16.449 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x2000 length 0x2000 00:12:16.449 raid0 : 5.14 1245.29 4.86 0.00 0.00 101198.31 916.01 80073.08 00:12:16.449 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x0 length 0x2000 00:12:16.449 concat0 : 5.14 1196.36 4.67 0.00 0.00 105185.61 983.04 119156.36 00:12:16.449 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x2000 length 0x2000 00:12:16.449 concat0 : 5.14 1245.14 4.86 0.00 0.00 101109.22 934.63 81026.33 00:12:16.449 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 
Verification LBA range: start 0x0 length 0x1000 00:12:16.449 raid1 : 5.14 1195.97 4.67 0.00 0.00 105097.15 997.93 125829.12 00:12:16.449 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x1000 length 0x1000 00:12:16.449 raid1 : 5.14 1244.97 4.86 0.00 0.00 101015.63 1146.88 81979.58 00:12:16.449 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x0 length 0x4e2 00:12:16.449 AIO0 : 5.14 1195.61 4.67 0.00 0.00 105007.56 1050.07 128688.87 00:12:16.449 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:16.449 Verification LBA range: start 0x4e2 length 0x4e2 00:12:16.449 AIO0 : 5.14 1244.70 4.86 0.00 0.00 100892.79 711.21 85792.58 00:12:16.449 =================================================================================================================== 00:12:16.449 Total : 41281.47 161.26 0.00 0.00 98259.23 47.71 280255.77 00:12:18.360 00:12:18.360 real 0m8.820s 00:12:18.360 user 0m15.888s 00:12:18.360 sys 0m0.578s 00:12:18.360 14:05:04 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.360 14:05:04 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:18.360 ************************************ 00:12:18.360 END TEST bdev_verify 00:12:18.360 ************************************ 00:12:18.618 14:05:04 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:18.618 14:05:04 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:18.618 14:05:04 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:12:18.618 14:05:04 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.618 14:05:04 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:18.618 ************************************ 00:12:18.618 START TEST bdev_verify_big_io 00:12:18.618 ************************************ 00:12:18.618 14:05:04 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:18.618 [2024-07-15 14:05:04.421287] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
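Both verification stages traced here drive the same bdevperf binary against the generated bdev.json and differ only in I/O size (-o 4096 for bdev_verify above, -o 65536 for the bdev_verify_big_io stage that starts next); a minimal standalone sketch, using only the paths and flags visible in the traced commands and assuming the same spdk_repo checkout, would be:

  # repeat the traced verify workload by hand; swap -o 4096 for -o 65536 to get the big-I/O variant
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3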
00:12:18.618 [2024-07-15 14:05:04.421628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184770 ] 00:12:18.619 [2024-07-15 14:05:04.577092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:18.876 [2024-07-15 14:05:04.796657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.876 [2024-07-15 14:05:04.796663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.442 [2024-07-15 14:05:05.178481] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:19.442 [2024-07-15 14:05:05.178966] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:19.442 [2024-07-15 14:05:05.186441] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:19.442 [2024-07-15 14:05:05.186744] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:19.442 [2024-07-15 14:05:05.194494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:19.442 [2024-07-15 14:05:05.194804] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:19.442 [2024-07-15 14:05:05.195018] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:19.442 [2024-07-15 14:05:05.391339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:19.442 [2024-07-15 14:05:05.391823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.442 [2024-07-15 14:05:05.392072] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:19.442 [2024-07-15 14:05:05.392322] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.442 [2024-07-15 14:05:05.394546] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.442 [2024-07-15 14:05:05.394811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:20.009 [2024-07-15 14:05:05.750078] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.753892] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.757853] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.761796] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.765281] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.769158] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.772512] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.776476] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.779841] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.783723] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.787023] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.790922] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.794211] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.798092] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.802045] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.805420] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:20.009 [2024-07-15 14:05:05.888837] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:20.009 [2024-07-15 14:05:05.895837] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:20.009 Running I/O for 5 seconds... 00:12:26.572 00:12:26.572 Latency(us) 00:12:26.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.572 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:26.572 Verification LBA range: start 0x0 length 0x100 00:12:26.572 Malloc0 : 5.38 475.52 29.72 0.00 0.00 266731.55 323.96 911307.87 00:12:26.572 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:26.572 Verification LBA range: start 0x100 length 0x100 00:12:26.572 Malloc0 : 5.27 412.82 25.80 0.00 0.00 306463.71 307.20 1060015.01 00:12:26.572 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x80 00:12:26.573 Malloc1p0 : 5.50 241.31 15.08 0.00 0.00 511850.40 1616.06 1075267.03 00:12:26.573 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x80 length 0x80 00:12:26.573 Malloc1p0 : 5.82 71.50 4.47 0.00 0.00 1694124.37 815.48 2547086.43 00:12:26.573 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x80 00:12:26.573 Malloc1p1 : 5.66 79.14 4.95 0.00 0.00 1529060.76 722.39 2211542.11 00:12:26.573 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x80 length 0x80 00:12:26.573 Malloc1p1 : 5.82 71.49 4.47 0.00 0.00 1664240.18 815.48 2470826.36 00:12:26.573 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p0 : 5.46 64.45 4.03 0.00 0.00 471398.91 370.50 800730.76 00:12:26.573 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p0 : 5.53 57.82 3.61 0.00 0.00 516832.50 357.47 880803.84 00:12:26.573 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p1 : 5.46 64.45 4.03 0.00 0.00 469799.61 383.53 789291.75 00:12:26.573 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p1 : 5.59 60.16 3.76 0.00 0.00 497744.84 379.81 869364.83 00:12:26.573 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p2 : 5.46 64.44 4.03 0.00 0.00 468210.48 379.81 781665.75 00:12:26.573 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p2 : 5.59 60.16 3.76 0.00 0.00 495519.20 409.60 854112.81 00:12:26.573 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 
32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p3 : 5.46 64.43 4.03 0.00 0.00 466452.24 377.95 774039.74 00:12:26.573 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p3 : 5.59 60.15 3.76 0.00 0.00 493078.57 338.85 842673.80 00:12:26.573 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p4 : 5.46 64.43 4.03 0.00 0.00 464835.35 336.99 758787.72 00:12:26.573 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p4 : 5.59 60.14 3.76 0.00 0.00 490950.91 348.16 827421.79 00:12:26.573 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p5 : 5.50 66.86 4.18 0.00 0.00 448002.01 364.92 751161.72 00:12:26.573 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p5 : 5.59 60.14 3.76 0.00 0.00 488610.19 344.44 815982.78 00:12:26.573 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p6 : 5.50 66.85 4.18 0.00 0.00 446477.05 381.67 739722.71 00:12:26.573 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p6 : 5.59 60.13 3.76 0.00 0.00 486573.47 353.75 804543.77 00:12:26.573 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x20 00:12:26.573 Malloc2p7 : 5.51 66.84 4.18 0.00 0.00 444623.18 521.31 724470.69 00:12:26.573 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x20 length 0x20 00:12:26.573 Malloc2p7 : 5.59 60.13 3.76 0.00 0.00 484473.76 316.51 793104.76 00:12:26.573 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x100 00:12:26.573 TestPT : 5.71 79.11 4.94 0.00 0.00 1459630.66 43849.54 1868371.78 00:12:26.573 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x100 length 0x100 00:12:26.573 TestPT : 5.83 71.31 4.46 0.00 0.00 1593205.94 45994.36 2104778.01 00:12:26.573 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x200 00:12:26.573 raid0 : 5.66 87.58 5.47 0.00 0.00 1309838.81 700.04 1967509.88 00:12:26.573 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x200 length 0x200 00:12:26.573 raid0 : 5.69 85.00 5.31 0.00 0.00 1333289.42 759.62 2196290.09 00:12:26.573 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x200 00:12:26.573 concat0 : 5.74 91.99 5.75 0.00 0.00 1229140.47 748.45 1898875.81 00:12:26.573 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x200 length 0x200 00:12:26.573 concat0 : 5.79 93.93 5.87 0.00 0.00 1188085.48 741.00 2120030.02 
00:12:26.573 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x100 00:12:26.573 raid1 : 5.74 112.17 7.01 0.00 0.00 1004400.19 945.80 1830241.75 00:12:26.573 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x100 length 0x100 00:12:26.573 raid1 : 5.84 113.80 7.11 0.00 0.00 966714.57 968.15 2043769.95 00:12:26.573 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x0 length 0x4e 00:12:26.573 AIO0 : 5.74 107.94 6.75 0.00 0.00 629729.32 759.62 1090519.04 00:12:26.573 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:26.573 Verification LBA range: start 0x4e length 0x4e 00:12:26.573 AIO0 : 5.88 128.97 8.06 0.00 0.00 513551.07 904.84 1204909.15 00:12:26.573 =================================================================================================================== 00:12:26.573 Total : 3325.17 207.82 0.00 0.00 688613.56 307.20 2547086.43 00:12:28.555 00:12:28.555 real 0m9.979s 00:12:28.555 user 0m18.298s 00:12:28.555 sys 0m0.546s 00:12:28.555 14:05:14 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.555 14:05:14 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.555 ************************************ 00:12:28.555 END TEST bdev_verify_big_io 00:12:28.555 ************************************ 00:12:28.555 14:05:14 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:28.555 14:05:14 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:28.555 14:05:14 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:28.555 14:05:14 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.555 14:05:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:28.555 ************************************ 00:12:28.555 START TEST bdev_write_zeroes 00:12:28.555 ************************************ 00:12:28.555 14:05:14 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:28.555 [2024-07-15 14:05:14.473125] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
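Each of these sub-tests drives the same bdevperf example binary against the bdev.json configuration, only varying the workload, queue depth, I/O size and run time; the bdev_write_zeroes command traced above could be reproduced by hand along these lines (an illustrative sketch, not an extra step of the test; paths are copied from the log and assumed to exist on the test VM):

# Illustrative re-run of the write_zeroes job traced above.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR/build/examples/bdevperf" --json "$SPDK_DIR/test/bdev/bdev.json" -q 128 -o 4096 -w write_zeroes -t 1
# -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds; judging by the
# job headers earlier in the log, the big-I/O verify run uses the same binary with a verify
# workload, 64 KiB I/Os and a 5 second run time.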
00:12:28.555 [2024-07-15 14:05:14.473845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184912 ] 00:12:28.813 [2024-07-15 14:05:14.637818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.071 [2024-07-15 14:05:14.868612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.329 [2024-07-15 14:05:15.265343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.329 [2024-07-15 14:05:15.265791] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:29.329 [2024-07-15 14:05:15.273266] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.329 [2024-07-15 14:05:15.273503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:29.329 [2024-07-15 14:05:15.281305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.329 [2024-07-15 14:05:15.281550] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:29.329 [2024-07-15 14:05:15.281841] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:29.587 [2024-07-15 14:05:15.473392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:29.587 [2024-07-15 14:05:15.473891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.587 [2024-07-15 14:05:15.474410] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:29.587 [2024-07-15 14:05:15.474644] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.587 [2024-07-15 14:05:15.476632] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.587 [2024-07-15 14:05:15.476877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:29.846 Running I/O for 1 seconds... 
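The vbdev_passthru notices above are the passthru module claiming the base bdev Malloc3 and re-exposing it as the TestPT device the later jobs run against. In this run TestPT comes from the bdev.json config handed to bdevperf, but an equivalent setup can be built over RPC; a rough sketch (the bdev_passthru_create option names are from memory and worth confirming against rpc.py's help output):

SPDK_DIR=/home/vagrant/spdk_repo/spdk
# 128 MB malloc bdev with 512-byte blocks, then a passthru bdev layered on top of it.
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b Malloc3 128 512
"$SPDK_DIR/scripts/rpc.py" bdev_passthru_create -b Malloc3 -p TestPT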
00:12:31.222 00:12:31.222 Latency(us) 00:12:31.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.222 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.222 Malloc0 : 1.02 10969.95 42.85 0.00 0.00 11661.64 366.78 23354.65 00:12:31.222 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.222 Malloc1p0 : 1.02 10966.43 42.84 0.00 0.00 11656.54 502.69 22997.18 00:12:31.222 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.222 Malloc1p1 : 1.02 10963.07 42.82 0.00 0.00 11648.65 498.97 22520.55 00:12:31.222 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.222 Malloc2p0 : 1.02 10959.83 42.81 0.00 0.00 11636.94 539.93 22043.93 00:12:31.222 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.222 Malloc2p1 : 1.02 10956.69 42.80 0.00 0.00 11627.11 506.41 21567.30 00:12:31.222 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.222 Malloc2p2 : 1.02 10953.59 42.79 0.00 0.00 11618.69 521.31 21209.83 00:12:31.223 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 Malloc2p3 : 1.02 10950.28 42.77 0.00 0.00 11609.11 517.59 20614.05 00:12:31.223 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 Malloc2p4 : 1.02 10947.29 42.76 0.00 0.00 11596.60 491.52 20137.43 00:12:31.223 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 Malloc2p5 : 1.02 11003.11 42.98 0.00 0.00 11526.14 491.52 19779.96 00:12:31.223 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 Malloc2p6 : 1.02 10999.85 42.97 0.00 0.00 11514.59 498.97 19303.33 00:12:31.223 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 Malloc2p7 : 1.02 10996.76 42.96 0.00 0.00 11507.37 528.76 18826.71 00:12:31.223 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 TestPT : 1.02 10993.63 42.94 0.00 0.00 11496.25 521.31 18469.24 00:12:31.223 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 raid0 : 1.02 10989.46 42.93 0.00 0.00 11485.38 882.50 17635.14 00:12:31.223 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 concat0 : 1.03 10985.53 42.91 0.00 0.00 11466.57 826.65 16920.20 00:12:31.223 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 raid1 : 1.03 10979.95 42.89 0.00 0.00 11447.89 1340.51 15728.64 00:12:31.223 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:31.223 AIO0 : 1.03 10973.55 42.87 0.00 0.00 11420.33 1400.09 15371.17 00:12:31.223 =================================================================================================================== 00:12:31.223 Total : 175588.96 685.89 0.00 0.00 11557.06 366.78 23354.65 00:12:33.127 00:12:33.127 real 0m4.638s 00:12:33.127 user 0m3.990s 00:12:33.127 sys 0m0.438s 00:12:33.127 14:05:19 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:33.127 14:05:19 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:12:33.127 ************************************ 00:12:33.127 END TEST bdev_write_zeroes 00:12:33.127 ************************************ 00:12:33.127 14:05:19 
blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:12:33.127 14:05:19 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:33.127 14:05:19 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:33.127 14:05:19 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:33.127 14:05:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:33.127 ************************************ 00:12:33.127 START TEST bdev_json_nonenclosed 00:12:33.127 ************************************ 00:12:33.127 14:05:19 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:33.385 [2024-07-15 14:05:19.172070] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:33.385 [2024-07-15 14:05:19.172526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid184993 ] 00:12:33.385 [2024-07-15 14:05:19.339831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.643 [2024-07-15 14:05:19.607823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.643 [2024-07-15 14:05:19.608361] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:12:33.643 [2024-07-15 14:05:19.608667] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:33.643 [2024-07-15 14:05:19.608955] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:34.207 00:12:34.207 real 0m0.927s 00:12:34.207 user 0m0.685s 00:12:34.207 sys 0m0.133s 00:12:34.207 14:05:20 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:12:34.207 14:05:20 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:34.207 14:05:20 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:12:34.207 ************************************ 00:12:34.207 END TEST bdev_json_nonenclosed 00:12:34.207 ************************************ 00:12:34.207 14:05:20 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:12:34.207 14:05:20 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:12:34.207 14:05:20 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:34.207 14:05:20 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:12:34.207 14:05:20 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.207 14:05:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:34.207 ************************************ 00:12:34.207 START TEST bdev_json_nonarray 00:12:34.207 ************************************ 00:12:34.207 14:05:20 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json 
/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:34.207 [2024-07-15 14:05:20.161305] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:34.207 [2024-07-15 14:05:20.161751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185031 ] 00:12:34.464 [2024-07-15 14:05:20.331068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.722 [2024-07-15 14:05:20.610916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.722 [2024-07-15 14:05:20.611547] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:12:34.722 [2024-07-15 14:05:20.611902] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:34.722 [2024-07-15 14:05:20.612193] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:35.287 00:12:35.287 real 0m0.937s 00:12:35.287 user 0m0.703s 00:12:35.287 sys 0m0.124s 00:12:35.287 14:05:21 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:12:35.287 14:05:21 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.287 14:05:21 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:12:35.287 ************************************ 00:12:35.287 END TEST bdev_json_nonarray 00:12:35.287 ************************************ 00:12:35.287 14:05:21 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:12:35.287 14:05:21 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:12:35.287 14:05:21 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:12:35.287 14:05:21 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:12:35.287 14:05:21 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:35.287 14:05:21 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.287 14:05:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:35.287 ************************************ 00:12:35.287 START TEST bdev_qos 00:12:35.287 ************************************ 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=185069 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:35.287 Process qos testing pid: 185069 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 185069' 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 185069 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 185069 ']' 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.287 
14:05:21 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.287 14:05:21 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:35.287 [2024-07-15 14:05:21.173863] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:12:35.287 [2024-07-15 14:05:21.174445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185069 ] 00:12:35.545 [2024-07-15 14:05:21.343974] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.803 [2024-07-15 14:05:21.613388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.369 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:36.369 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:12:36.369 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:36.369 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.369 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:36.627 Malloc_0 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:36.627 [ 00:12:36.627 { 00:12:36.627 "name": "Malloc_0", 00:12:36.627 "aliases": [ 00:12:36.627 "d764f6f4-87d2-49e3-ad7a-2c5486aeb584" 00:12:36.627 ], 00:12:36.627 "product_name": "Malloc disk", 00:12:36.627 "block_size": 512, 00:12:36.627 "num_blocks": 262144, 00:12:36.627 "uuid": "d764f6f4-87d2-49e3-ad7a-2c5486aeb584", 00:12:36.627 "assigned_rate_limits": { 00:12:36.627 "rw_ios_per_sec": 0, 00:12:36.627 "rw_mbytes_per_sec": 0, 00:12:36.627 "r_mbytes_per_sec": 0, 
00:12:36.627 "w_mbytes_per_sec": 0 00:12:36.627 }, 00:12:36.627 "claimed": false, 00:12:36.627 "zoned": false, 00:12:36.627 "supported_io_types": { 00:12:36.627 "read": true, 00:12:36.627 "write": true, 00:12:36.627 "unmap": true, 00:12:36.627 "flush": true, 00:12:36.627 "reset": true, 00:12:36.627 "nvme_admin": false, 00:12:36.627 "nvme_io": false, 00:12:36.627 "nvme_io_md": false, 00:12:36.627 "write_zeroes": true, 00:12:36.627 "zcopy": true, 00:12:36.627 "get_zone_info": false, 00:12:36.627 "zone_management": false, 00:12:36.627 "zone_append": false, 00:12:36.627 "compare": false, 00:12:36.627 "compare_and_write": false, 00:12:36.627 "abort": true, 00:12:36.627 "seek_hole": false, 00:12:36.627 "seek_data": false, 00:12:36.627 "copy": true, 00:12:36.627 "nvme_iov_md": false 00:12:36.627 }, 00:12:36.627 "memory_domains": [ 00:12:36.627 { 00:12:36.627 "dma_device_id": "system", 00:12:36.627 "dma_device_type": 1 00:12:36.627 }, 00:12:36.627 { 00:12:36.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.627 "dma_device_type": 2 00:12:36.627 } 00:12:36.627 ], 00:12:36.627 "driver_specific": {} 00:12:36.627 } 00:12:36.627 ] 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.627 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:36.627 Null_1 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:36.628 [ 00:12:36.628 { 00:12:36.628 "name": "Null_1", 00:12:36.628 "aliases": [ 00:12:36.628 "f217ff91-75de-44d0-93db-4c4d58b4af42" 00:12:36.628 ], 00:12:36.628 "product_name": "Null disk", 00:12:36.628 "block_size": 512, 00:12:36.628 "num_blocks": 262144, 00:12:36.628 "uuid": "f217ff91-75de-44d0-93db-4c4d58b4af42", 00:12:36.628 "assigned_rate_limits": { 00:12:36.628 "rw_ios_per_sec": 0, 00:12:36.628 "rw_mbytes_per_sec": 0, 00:12:36.628 
"r_mbytes_per_sec": 0, 00:12:36.628 "w_mbytes_per_sec": 0 00:12:36.628 }, 00:12:36.628 "claimed": false, 00:12:36.628 "zoned": false, 00:12:36.628 "supported_io_types": { 00:12:36.628 "read": true, 00:12:36.628 "write": true, 00:12:36.628 "unmap": false, 00:12:36.628 "flush": false, 00:12:36.628 "reset": true, 00:12:36.628 "nvme_admin": false, 00:12:36.628 "nvme_io": false, 00:12:36.628 "nvme_io_md": false, 00:12:36.628 "write_zeroes": true, 00:12:36.628 "zcopy": false, 00:12:36.628 "get_zone_info": false, 00:12:36.628 "zone_management": false, 00:12:36.628 "zone_append": false, 00:12:36.628 "compare": false, 00:12:36.628 "compare_and_write": false, 00:12:36.628 "abort": true, 00:12:36.628 "seek_hole": false, 00:12:36.628 "seek_data": false, 00:12:36.628 "copy": false, 00:12:36.628 "nvme_iov_md": false 00:12:36.628 }, 00:12:36.628 "driver_specific": {} 00:12:36.628 } 00:12:36.628 ] 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:12:36.628 14:05:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:12:36.628 Running I/O for 60 seconds... 
00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 171828.20 687312.78 0.00 0.00 694272.00 0.00 0.00 ' 00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=171828.20 00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 171828 00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=171828 00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=42000 00:12:41.930 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 42000 -gt 1000 ']' 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 42000 Malloc_0 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 42000 IOPS Malloc_0 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:41.931 14:05:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:41.931 ************************************ 00:12:41.931 START TEST bdev_qos_iops 00:12:41.931 ************************************ 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 42000 IOPS Malloc_0 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=42000 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:12:41.931 14:05:27 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 42011.99 168047.97 0.00 0.00 170016.00 0.00 0.00 ' 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=42011.99 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- 
bdev/blockdev.sh@385 -- # echo 42011 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=42011 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=37800 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=46200 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 42011 -lt 37800 ']' 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 42011 -gt 46200 ']' 00:12:47.221 00:12:47.221 real 0m5.238s 00:12:47.221 user 0m0.165s 00:12:47.221 sys 0m0.027s 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:47.221 14:05:32 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:12:47.221 ************************************ 00:12:47.221 END TEST bdev_qos_iops 00:12:47.221 ************************************ 00:12:47.221 14:05:32 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:12:47.221 14:05:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:12:47.221 14:05:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:12:47.221 14:05:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:12:47.221 14:05:32 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:47.221 14:05:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:47.221 14:05:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:12:47.221 14:05:33 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 46132.67 184530.69 0.00 0.00 186368.00 0.00 0.00 ' 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=186368.00 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 186368 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=186368 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=18 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 18 -lt 2 ']' 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 18 Null_1 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 18 BANDWIDTH Null_1 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 
00:12:52.504 14:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.504 14:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:52.504 ************************************ 00:12:52.504 START TEST bdev_qos_bw 00:12:52.504 ************************************ 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 18 BANDWIDTH Null_1 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=18 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:52.504 14:05:38 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 4608.87 18435.49 0.00 0.00 18616.00 0.00 0.00 ' 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=18616.00 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 18616 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=18616 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=18432 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=16588 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=20275 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 18616 -lt 16588 ']' 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 18616 -gt 20275 ']' 00:12:57.773 00:12:57.773 real 0m5.211s 00:12:57.773 user 0m0.121s 00:12:57.773 sys 0m0.034s 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 ************************************ 00:12:57.773 END TEST bdev_qos_bw 00:12:57.773 ************************************ 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- 
bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.773 14:05:43 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:12:57.773 ************************************ 00:12:57.773 START TEST bdev_qos_ro_bw 00:12:57.773 ************************************ 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:12:57.773 14:05:43 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.21 2044.85 0.00 0.00 2064.00 0.00 0.00 ' 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2064.00 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2064 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2064 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2064 -lt 1843 ']' 00:13:03.033 14:05:48 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2064 -gt 2252 ']' 00:13:03.033 00:13:03.033 real 0m5.197s 00:13:03.033 user 0m0.122s 00:13:03.033 sys 0m0.036s 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.033 14:05:48 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:13:03.033 ************************************ 00:13:03.033 END TEST bdev_qos_ro_bw 00:13:03.033 ************************************ 00:13:03.033 14:05:48 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:13:03.033 14:05:48 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:03.033 14:05:48 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.033 14:05:48 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:03.600 00:13:03.600 Latency(us) 00:13:03.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.600 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:03.600 Malloc_0 : 26.65 57442.15 224.38 0.00 0.00 4416.28 960.70 503316.48 00:13:03.600 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:03.600 Null_1 : 26.81 52471.20 204.97 0.00 0.00 4871.89 351.88 163005.91 00:13:03.600 =================================================================================================================== 00:13:03.600 Total : 109913.35 429.35 0.00 0.00 4634.48 351.88 503316.48 00:13:03.600 0 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 185069 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 185069 ']' 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 185069 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 185069 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 185069' 00:13:03.600 killing process with pid 185069 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 185069 00:13:03.600 Received shutdown signal, test time was about 26.850185 seconds 00:13:03.600 00:13:03.600 Latency(us) 00:13:03.600 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.600 
=================================================================================================================== 00:13:03.600 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:03.600 14:05:49 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 185069 00:13:04.972 14:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:13:04.972 00:13:04.972 real 0m29.645s 00:13:04.972 user 0m30.392s 00:13:04.972 sys 0m0.733s 00:13:04.972 14:05:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:04.972 14:05:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:04.972 ************************************ 00:13:04.972 END TEST bdev_qos 00:13:04.972 ************************************ 00:13:04.972 14:05:50 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:04.972 14:05:50 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:04.972 14:05:50 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:04.972 14:05:50 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:04.972 14:05:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:04.972 ************************************ 00:13:04.972 START TEST bdev_qd_sampling 00:13:04.972 ************************************ 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=185540 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 185540' 00:13:04.972 Process bdev QD sampling period testing pid: 185540 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 185540 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 185540 ']' 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:04.972 14:05:50 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:04.973 [2024-07-15 14:05:50.857760] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
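The bandwidth sub-tests above pass or fail on a tolerance band around the configured limit: with the 18 MB/s limit on Null_1 the trace accepts 16588-20275 (the measured 18616 falls inside), and with the 2 MB/s read-only limit on Malloc_0 it accepts 1843-2252 against a measured 2064. Those bounds are the limit expressed in the KiB/s units iostat reports (18 * 1024 = 18432, 2 * 1024 = 2048) plus or minus ten percent; a small sketch of the check, reproducing this log's numbers (the upstream script's exact expression may differ):

qos_limit_kb=$(( 18 * 1024 ))              # 18 MB/s limit in KiB/s -> 18432
lower_limit=$(( qos_limit_kb * 9 / 10 ))   # 16588, the lower bound seen in the trace
upper_limit=$(( qos_limit_kb * 11 / 10 ))  # 20275, the upper bound seen in the trace
measured=18616                             # KiB/s reported by iostat.py for Null_1
[ "$measured" -ge "$lower_limit" ] && [ "$measured" -le "$upper_limit" ] && echo "within QoS tolerance"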
00:13:04.973 [2024-07-15 14:05:50.858254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185540 ] 00:13:05.230 [2024-07-15 14:05:51.036640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:05.489 [2024-07-15 14:05:51.252236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.489 [2024-07-15 14:05:51.252237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.055 14:05:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.055 14:05:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:13:06.055 14:05:51 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:06.055 14:05:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.055 14:05:51 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:06.055 Malloc_QD 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.055 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:06.314 [ 00:13:06.314 { 00:13:06.314 "name": "Malloc_QD", 00:13:06.314 "aliases": [ 00:13:06.314 "4f9d7ebc-6c05-4cb8-9ca7-08b41f150a42" 00:13:06.314 ], 00:13:06.314 "product_name": "Malloc disk", 00:13:06.314 "block_size": 512, 00:13:06.314 "num_blocks": 262144, 00:13:06.314 "uuid": "4f9d7ebc-6c05-4cb8-9ca7-08b41f150a42", 00:13:06.314 "assigned_rate_limits": { 00:13:06.314 "rw_ios_per_sec": 0, 00:13:06.314 "rw_mbytes_per_sec": 0, 00:13:06.314 "r_mbytes_per_sec": 0, 00:13:06.314 "w_mbytes_per_sec": 0 00:13:06.314 }, 00:13:06.314 "claimed": false, 00:13:06.314 "zoned": false, 00:13:06.314 "supported_io_types": { 00:13:06.314 "read": true, 00:13:06.314 "write": true, 00:13:06.314 "unmap": true, 00:13:06.314 "flush": true, 00:13:06.314 "reset": true, 00:13:06.314 "nvme_admin": 
false, 00:13:06.314 "nvme_io": false, 00:13:06.314 "nvme_io_md": false, 00:13:06.314 "write_zeroes": true, 00:13:06.314 "zcopy": true, 00:13:06.314 "get_zone_info": false, 00:13:06.314 "zone_management": false, 00:13:06.314 "zone_append": false, 00:13:06.314 "compare": false, 00:13:06.314 "compare_and_write": false, 00:13:06.314 "abort": true, 00:13:06.314 "seek_hole": false, 00:13:06.314 "seek_data": false, 00:13:06.314 "copy": true, 00:13:06.314 "nvme_iov_md": false 00:13:06.314 }, 00:13:06.314 "memory_domains": [ 00:13:06.314 { 00:13:06.314 "dma_device_id": "system", 00:13:06.314 "dma_device_type": 1 00:13:06.314 }, 00:13:06.314 { 00:13:06.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.314 "dma_device_type": 2 00:13:06.314 } 00:13:06.314 ], 00:13:06.314 "driver_specific": {} 00:13:06.314 } 00:13:06.314 ] 00:13:06.314 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.314 14:05:52 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:13:06.314 14:05:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:13:06.314 14:05:52 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:06.314 Running I/O for 5 seconds... 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:13:08.212 "tick_rate": 2200000000, 00:13:08.212 "ticks": 1668395652376, 00:13:08.212 "bdevs": [ 00:13:08.212 { 00:13:08.212 "name": "Malloc_QD", 00:13:08.212 "bytes_read": 2068877824, 00:13:08.212 "num_read_ops": 505091, 00:13:08.212 "bytes_written": 0, 00:13:08.212 "num_write_ops": 0, 00:13:08.212 "bytes_unmapped": 0, 00:13:08.212 "num_unmap_ops": 0, 00:13:08.212 "bytes_copied": 0, 00:13:08.212 "num_copy_ops": 0, 00:13:08.212 "read_latency_ticks": 2166702404610, 00:13:08.212 "max_read_latency_ticks": 6360137, 00:13:08.212 "min_read_latency_ticks": 195718, 00:13:08.212 "write_latency_ticks": 0, 00:13:08.212 "max_write_latency_ticks": 0, 00:13:08.212 "min_write_latency_ticks": 0, 00:13:08.212 "unmap_latency_ticks": 0, 00:13:08.212 "max_unmap_latency_ticks": 0, 00:13:08.212 
"min_unmap_latency_ticks": 0, 00:13:08.212 "copy_latency_ticks": 0, 00:13:08.212 "max_copy_latency_ticks": 0, 00:13:08.212 "min_copy_latency_ticks": 0, 00:13:08.212 "io_error": {}, 00:13:08.212 "queue_depth_polling_period": 10, 00:13:08.212 "queue_depth": 512, 00:13:08.212 "io_time": 60, 00:13:08.212 "weighted_io_time": 30720 00:13:08.212 } 00:13:08.212 ] 00:13:08.212 }' 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.212 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:08.212 00:13:08.212 Latency(us) 00:13:08.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.212 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:08.212 Malloc_QD : 2.00 128265.55 501.04 0.00 0.00 1992.72 476.63 3187.43 00:13:08.212 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:08.212 Malloc_QD : 2.00 133830.24 522.77 0.00 0.00 1910.01 331.40 2427.81 00:13:08.212 =================================================================================================================== 00:13:08.212 Total : 262095.79 1023.81 0.00 0.00 1950.48 331.40 3187.43 00:13:08.470 0 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 185540 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 185540 ']' 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 185540 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 185540 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 185540' 00:13:08.470 killing process with pid 185540 00:13:08.470 14:05:54 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 185540 00:13:08.470 Received shutdown signal, test time was about 2.152632 seconds 00:13:08.470 00:13:08.470 Latency(us) 00:13:08.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.470 =================================================================================================================== 00:13:08.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:08.470 14:05:54 
blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 185540 00:13:09.914 14:05:55 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:13:09.914 00:13:09.914 real 0m4.929s 00:13:09.914 user 0m9.183s 00:13:09.914 sys 0m0.398s 00:13:09.914 14:05:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:09.914 14:05:55 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:09.914 ************************************ 00:13:09.914 END TEST bdev_qd_sampling 00:13:09.914 ************************************ 00:13:09.914 14:05:55 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:09.914 14:05:55 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:13:09.914 14:05:55 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:09.914 14:05:55 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:09.914 14:05:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:09.914 ************************************ 00:13:09.914 START TEST bdev_error 00:13:09.914 ************************************ 00:13:09.914 14:05:55 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:13:09.914 14:05:55 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:13:09.914 14:05:55 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:13:09.914 14:05:55 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:13:09.914 14:05:55 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=185640 00:13:09.914 14:05:55 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 185640' 00:13:09.914 14:05:55 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:09.914 Process error testing pid: 185640 00:13:09.914 14:05:55 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 185640 00:13:09.914 14:05:55 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 185640 ']' 00:13:09.914 14:05:55 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.914 14:05:55 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:09.914 14:05:55 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.914 14:05:55 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:09.914 14:05:55 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:09.914 [2024-07-15 14:05:55.841865] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
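The bdev_qd_sampling pass above reduces to a short RPC sequence against the bdevperf app. A minimal sketch of the same flow, assuming a running SPDK application on the default socket, the rpc.py path used elsewhere in this run, and 128 MiB / 512 B sizing for Malloc_QD (an assumption; only its iostat dump is shown above):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b Malloc_QD 128 512      # sizing assumed to match the other Malloc bdevs in this run
$rpc bdev_set_qd_sampling_period Malloc_QD 10     # sampling period 10, as set at bdev/blockdev.sh@523
# drive reads for a while; the job does this through bdevperf.py perform_tests
$rpc bdev_get_iostat -b Malloc_QD \
    | jq -r '.bdevs[0].queue_depth_polling_period, .bdevs[0].queue_depth, .bdevs[0].io_time'
$rpc bdev_malloc_delete Malloc_QD                 # cleanup, as done at bdev/blockdev.sh@553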
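The bdev_error suite that starts here stacks an error-injecting bdev (EE_Dev_1) on a Malloc base and lets bdevperf run while failures are injected. A condensed sketch of the RPC sequence it drives, with the device names and the -n 5 failure count as they appear in this test:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b Dev_1 128 512                 # base bdev
$rpc bdev_error_create Dev_1                             # creates EE_Dev_1 on top of Dev_1
$rpc bdev_malloc_create -b Dev_2 128 512                 # second device used by the job
$rpc bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type
# bdevperf (started with -z above) is then triggered via bdevperf.py perform_tests
$rpc bdev_error_delete EE_Dev_1                          # teardown, mirroring bdev/blockdev.sh@495-496
$rpc bdev_malloc_delete Dev_1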
00:13:09.914 [2024-07-15 14:05:55.842331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185640 ] 00:13:10.172 [2024-07-15 14:05:56.006335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.430 [2024-07-15 14:05:56.278946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.997 14:05:56 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:10.997 14:05:56 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:10.997 14:05:56 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:10.997 14:05:56 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:10.997 14:05:56 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.255 Dev_1 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.255 14:05:57 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.255 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.255 [ 00:13:11.255 { 00:13:11.255 "name": "Dev_1", 00:13:11.256 "aliases": [ 00:13:11.256 "46f08086-86b1-4e61-8b90-7786d45a962e" 00:13:11.256 ], 00:13:11.256 "product_name": "Malloc disk", 00:13:11.256 "block_size": 512, 00:13:11.256 "num_blocks": 262144, 00:13:11.256 "uuid": "46f08086-86b1-4e61-8b90-7786d45a962e", 00:13:11.256 "assigned_rate_limits": { 00:13:11.256 "rw_ios_per_sec": 0, 00:13:11.256 "rw_mbytes_per_sec": 0, 00:13:11.256 "r_mbytes_per_sec": 0, 00:13:11.256 "w_mbytes_per_sec": 0 00:13:11.256 }, 00:13:11.256 "claimed": false, 00:13:11.256 "zoned": false, 00:13:11.256 "supported_io_types": { 00:13:11.256 "read": true, 00:13:11.256 "write": true, 00:13:11.256 "unmap": true, 00:13:11.256 "flush": true, 00:13:11.256 "reset": true, 00:13:11.256 "nvme_admin": false, 00:13:11.256 "nvme_io": false, 00:13:11.256 "nvme_io_md": false, 00:13:11.256 "write_zeroes": true, 00:13:11.256 "zcopy": true, 00:13:11.256 "get_zone_info": false, 00:13:11.256 "zone_management": false, 00:13:11.256 "zone_append": false, 
00:13:11.256 "compare": false, 00:13:11.256 "compare_and_write": false, 00:13:11.256 "abort": true, 00:13:11.256 "seek_hole": false, 00:13:11.256 "seek_data": false, 00:13:11.256 "copy": true, 00:13:11.256 "nvme_iov_md": false 00:13:11.256 }, 00:13:11.256 "memory_domains": [ 00:13:11.256 { 00:13:11.256 "dma_device_id": "system", 00:13:11.256 "dma_device_type": 1 00:13:11.256 }, 00:13:11.256 { 00:13:11.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.256 "dma_device_type": 2 00:13:11.256 } 00:13:11.256 ], 00:13:11.256 "driver_specific": {} 00:13:11.256 } 00:13:11.256 ] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:11.256 14:05:57 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.256 true 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.256 Dev_2 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.256 [ 00:13:11.256 { 00:13:11.256 "name": "Dev_2", 00:13:11.256 "aliases": [ 00:13:11.256 "d6b64a06-7b23-48ff-b764-56c1977c15c3" 00:13:11.256 ], 00:13:11.256 "product_name": "Malloc disk", 00:13:11.256 "block_size": 512, 00:13:11.256 "num_blocks": 262144, 00:13:11.256 "uuid": "d6b64a06-7b23-48ff-b764-56c1977c15c3", 00:13:11.256 "assigned_rate_limits": { 00:13:11.256 "rw_ios_per_sec": 0, 00:13:11.256 "rw_mbytes_per_sec": 0, 00:13:11.256 "r_mbytes_per_sec": 0, 00:13:11.256 "w_mbytes_per_sec": 0 00:13:11.256 }, 00:13:11.256 "claimed": 
false, 00:13:11.256 "zoned": false, 00:13:11.256 "supported_io_types": { 00:13:11.256 "read": true, 00:13:11.256 "write": true, 00:13:11.256 "unmap": true, 00:13:11.256 "flush": true, 00:13:11.256 "reset": true, 00:13:11.256 "nvme_admin": false, 00:13:11.256 "nvme_io": false, 00:13:11.256 "nvme_io_md": false, 00:13:11.256 "write_zeroes": true, 00:13:11.256 "zcopy": true, 00:13:11.256 "get_zone_info": false, 00:13:11.256 "zone_management": false, 00:13:11.256 "zone_append": false, 00:13:11.256 "compare": false, 00:13:11.256 "compare_and_write": false, 00:13:11.256 "abort": true, 00:13:11.256 "seek_hole": false, 00:13:11.256 "seek_data": false, 00:13:11.256 "copy": true, 00:13:11.256 "nvme_iov_md": false 00:13:11.256 }, 00:13:11.256 "memory_domains": [ 00:13:11.256 { 00:13:11.256 "dma_device_id": "system", 00:13:11.256 "dma_device_type": 1 00:13:11.256 }, 00:13:11.256 { 00:13:11.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.256 "dma_device_type": 2 00:13:11.256 } 00:13:11.256 ], 00:13:11.256 "driver_specific": {} 00:13:11.256 } 00:13:11.256 ] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:11.256 14:05:57 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:11.256 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:11.514 14:05:57 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:11.514 14:05:57 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:13:11.514 14:05:57 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:11.514 Running I/O for 5 seconds... 00:13:12.446 14:05:58 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 185640 00:13:12.446 14:05:58 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 185640' 00:13:12.446 Process is existed as continue on error is set. 
Pid: 185640 00:13:12.446 14:05:58 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:12.446 14:05:58 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.446 14:05:58 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:12.446 14:05:58 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.446 14:05:58 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:12.446 14:05:58 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.446 14:05:58 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:12.446 Timeout while waiting for response: 00:13:12.446 00:13:12.446 00:13:12.704 14:05:58 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.704 14:05:58 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:13:16.889 00:13:16.889 Latency(us) 00:13:16.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.889 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:16.889 EE_Dev_1 : 0.89 100883.66 394.08 5.61 0.00 157.59 72.61 379.81 00:13:16.889 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:16.889 Dev_2 : 5.00 198104.81 773.85 0.00 0.00 79.73 53.06 324105.31 00:13:16.889 =================================================================================================================== 00:13:16.889 Total : 298988.47 1167.92 5.61 0.00 86.20 53.06 324105.31 00:13:17.823 14:06:03 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 185640 00:13:17.823 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 185640 ']' 00:13:17.823 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 185640 00:13:17.823 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:13:17.824 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:17.824 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 185640 00:13:17.824 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:17.824 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:17.824 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 185640' 00:13:17.824 killing process with pid 185640 00:13:17.824 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 185640 00:13:17.824 Received shutdown signal, test time was about 5.000000 seconds 00:13:17.824 00:13:17.824 Latency(us) 00:13:17.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.824 =================================================================================================================== 00:13:17.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:17.824 14:06:03 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 185640 00:13:19.211 14:06:05 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=185759 00:13:19.211 14:06:05 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 185759' 00:13:19.211 14:06:05 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:19.211 Process error testing pid: 185759 00:13:19.211 14:06:05 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 185759 00:13:19.211 14:06:05 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 185759 ']' 00:13:19.211 14:06:05 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.211 14:06:05 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:19.211 14:06:05 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.211 14:06:05 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:19.211 14:06:05 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:19.211 [2024-07-15 14:06:05.149707] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:13:19.211 [2024-07-15 14:06:05.150176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185759 ] 00:13:19.470 [2024-07-15 14:06:05.314550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.729 [2024-07-15 14:06:05.531403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.297 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:20.297 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:13:20.297 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:20.297 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.297 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.297 Dev_1 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.556 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:20.556 14:06:06 blockdev_general.bdev_error -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.556 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.557 [ 00:13:20.557 { 00:13:20.557 "name": "Dev_1", 00:13:20.557 "aliases": [ 00:13:20.557 "8236ddf6-b31a-4393-af5c-2bae7934f0ad" 00:13:20.557 ], 00:13:20.557 "product_name": "Malloc disk", 00:13:20.557 "block_size": 512, 00:13:20.557 "num_blocks": 262144, 00:13:20.557 "uuid": "8236ddf6-b31a-4393-af5c-2bae7934f0ad", 00:13:20.557 "assigned_rate_limits": { 00:13:20.557 "rw_ios_per_sec": 0, 00:13:20.557 "rw_mbytes_per_sec": 0, 00:13:20.557 "r_mbytes_per_sec": 0, 00:13:20.557 "w_mbytes_per_sec": 0 00:13:20.557 }, 00:13:20.557 "claimed": false, 00:13:20.557 "zoned": false, 00:13:20.557 "supported_io_types": { 00:13:20.557 "read": true, 00:13:20.557 "write": true, 00:13:20.557 "unmap": true, 00:13:20.557 "flush": true, 00:13:20.557 "reset": true, 00:13:20.557 "nvme_admin": false, 00:13:20.557 "nvme_io": false, 00:13:20.557 "nvme_io_md": false, 00:13:20.557 "write_zeroes": true, 00:13:20.557 "zcopy": true, 00:13:20.557 "get_zone_info": false, 00:13:20.557 "zone_management": false, 00:13:20.557 "zone_append": false, 00:13:20.557 "compare": false, 00:13:20.557 "compare_and_write": false, 00:13:20.557 "abort": true, 00:13:20.557 "seek_hole": false, 00:13:20.557 "seek_data": false, 00:13:20.557 "copy": true, 00:13:20.557 "nvme_iov_md": false 00:13:20.557 }, 00:13:20.557 "memory_domains": [ 00:13:20.557 { 00:13:20.557 "dma_device_id": "system", 00:13:20.557 "dma_device_type": 1 00:13:20.557 }, 00:13:20.557 { 00:13:20.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.557 "dma_device_type": 2 00:13:20.557 } 00:13:20.557 ], 00:13:20.557 "driver_specific": {} 00:13:20.557 } 00:13:20.557 ] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:20.557 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.557 true 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.557 Dev_2 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd 
bdev_wait_for_examine 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.557 [ 00:13:20.557 { 00:13:20.557 "name": "Dev_2", 00:13:20.557 "aliases": [ 00:13:20.557 "59a7a320-fc6c-4393-be19-08fd55d897af" 00:13:20.557 ], 00:13:20.557 "product_name": "Malloc disk", 00:13:20.557 "block_size": 512, 00:13:20.557 "num_blocks": 262144, 00:13:20.557 "uuid": "59a7a320-fc6c-4393-be19-08fd55d897af", 00:13:20.557 "assigned_rate_limits": { 00:13:20.557 "rw_ios_per_sec": 0, 00:13:20.557 "rw_mbytes_per_sec": 0, 00:13:20.557 "r_mbytes_per_sec": 0, 00:13:20.557 "w_mbytes_per_sec": 0 00:13:20.557 }, 00:13:20.557 "claimed": false, 00:13:20.557 "zoned": false, 00:13:20.557 "supported_io_types": { 00:13:20.557 "read": true, 00:13:20.557 "write": true, 00:13:20.557 "unmap": true, 00:13:20.557 "flush": true, 00:13:20.557 "reset": true, 00:13:20.557 "nvme_admin": false, 00:13:20.557 "nvme_io": false, 00:13:20.557 "nvme_io_md": false, 00:13:20.557 "write_zeroes": true, 00:13:20.557 "zcopy": true, 00:13:20.557 "get_zone_info": false, 00:13:20.557 "zone_management": false, 00:13:20.557 "zone_append": false, 00:13:20.557 "compare": false, 00:13:20.557 "compare_and_write": false, 00:13:20.557 "abort": true, 00:13:20.557 "seek_hole": false, 00:13:20.557 "seek_data": false, 00:13:20.557 "copy": true, 00:13:20.557 "nvme_iov_md": false 00:13:20.557 }, 00:13:20.557 "memory_domains": [ 00:13:20.557 { 00:13:20.557 "dma_device_id": "system", 00:13:20.557 "dma_device_type": 1 00:13:20.557 }, 00:13:20.557 { 00:13:20.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.557 "dma_device_type": 2 00:13:20.557 } 00:13:20.557 ], 00:13:20.557 "driver_specific": {} 00:13:20.557 } 00:13:20.557 ] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:13:20.557 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.557 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 185759 00:13:20.557 14:06:06 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 185759 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:20.557 14:06:06 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 185759 00:13:20.817 Running I/O for 5 seconds... 00:13:20.817 task offset: 54064 on job bdev=EE_Dev_1 fails 00:13:20.817 00:13:20.817 Latency(us) 00:13:20.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:20.817 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:20.817 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:20.817 EE_Dev_1 : 0.00 50000.00 195.31 11363.64 0.00 216.15 66.09 389.12 00:13:20.817 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:20.817 Dev_2 : 0.00 55749.13 217.77 0.00 0.00 168.61 74.47 288.58 00:13:20.817 =================================================================================================================== 00:13:20.817 Total : 105749.13 413.08 11363.64 0.00 190.37 66.09 389.12 00:13:20.817 [2024-07-15 14:06:06.600537] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:20.817 request: 00:13:20.817 { 00:13:20.817 "method": "perform_tests", 00:13:20.817 "req_id": 1 00:13:20.817 } 00:13:20.817 Got JSON-RPC error response 00:13:20.817 response: 00:13:20.817 { 00:13:20.817 "code": -32603, 00:13:20.817 "message": "bdevperf failed with error Operation not permitted" 00:13:20.817 } 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:22.720 00:13:22.720 real 0m12.570s 00:13:22.720 user 0m12.862s 00:13:22.720 sys 0m0.851s 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:22.720 14:06:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:22.720 ************************************ 00:13:22.720 END TEST bdev_error 00:13:22.720 ************************************ 00:13:22.720 14:06:08 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:22.720 14:06:08 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:13:22.720 14:06:08 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:22.720 14:06:08 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:22.720 14:06:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:22.720 ************************************ 00:13:22.720 START TEST bdev_stat 00:13:22.720 ************************************ 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=185822 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- 
bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 185822' 00:13:22.720 Process Bdev IO statistics testing pid: 185822 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 185822 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 185822 ']' 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:22.720 14:06:08 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:22.720 [2024-07-15 14:06:08.465435] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:13:22.720 [2024-07-15 14:06:08.465794] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid185822 ] 00:13:22.720 [2024-07-15 14:06:08.626001] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:22.979 [2024-07-15 14:06:08.886249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.979 [2024-07-15 14:06:08.886253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.546 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:23.546 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:13:23.546 14:06:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:23.546 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.546 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:23.804 Malloc_STAT 00:13:23.804 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.804 14:06:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:13:23.804 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:13:23.804 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:23.804 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:23.805 14:06:09 blockdev_general.bdev_stat 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:23.805 [ 00:13:23.805 { 00:13:23.805 "name": "Malloc_STAT", 00:13:23.805 "aliases": [ 00:13:23.805 "7bc8eb76-a0a9-4a87-9406-ae3d687a6a7c" 00:13:23.805 ], 00:13:23.805 "product_name": "Malloc disk", 00:13:23.805 "block_size": 512, 00:13:23.805 "num_blocks": 262144, 00:13:23.805 "uuid": "7bc8eb76-a0a9-4a87-9406-ae3d687a6a7c", 00:13:23.805 "assigned_rate_limits": { 00:13:23.805 "rw_ios_per_sec": 0, 00:13:23.805 "rw_mbytes_per_sec": 0, 00:13:23.805 "r_mbytes_per_sec": 0, 00:13:23.805 "w_mbytes_per_sec": 0 00:13:23.805 }, 00:13:23.805 "claimed": false, 00:13:23.805 "zoned": false, 00:13:23.805 "supported_io_types": { 00:13:23.805 "read": true, 00:13:23.805 "write": true, 00:13:23.805 "unmap": true, 00:13:23.805 "flush": true, 00:13:23.805 "reset": true, 00:13:23.805 "nvme_admin": false, 00:13:23.805 "nvme_io": false, 00:13:23.805 "nvme_io_md": false, 00:13:23.805 "write_zeroes": true, 00:13:23.805 "zcopy": true, 00:13:23.805 "get_zone_info": false, 00:13:23.805 "zone_management": false, 00:13:23.805 "zone_append": false, 00:13:23.805 "compare": false, 00:13:23.805 "compare_and_write": false, 00:13:23.805 "abort": true, 00:13:23.805 "seek_hole": false, 00:13:23.805 "seek_data": false, 00:13:23.805 "copy": true, 00:13:23.805 "nvme_iov_md": false 00:13:23.805 }, 00:13:23.805 "memory_domains": [ 00:13:23.805 { 00:13:23.805 "dma_device_id": "system", 00:13:23.805 "dma_device_type": 1 00:13:23.805 }, 00:13:23.805 { 00:13:23.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.805 "dma_device_type": 2 00:13:23.805 } 00:13:23.805 ], 00:13:23.805 "driver_specific": {} 00:13:23.805 } 00:13:23.805 ] 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:13:23.805 14:06:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:23.805 Running I/O for 10 seconds... 
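The statistics pass that follows snapshots the aggregate read counter, queries per-channel counters, and checks that the summed channel reads land between two aggregate snapshots taken while I/O keeps running. A condensed sketch of that bookkeeping (same rpc.py path as above; jq assumed available):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
io_count1=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
per_channel=$($rpc bdev_get_iostat -b Malloc_STAT -c)
ch1=$(echo "$per_channel" | jq -r '.channels[0].num_read_ops')
ch2=$(echo "$per_channel" | jq -r '.channels[1].num_read_ops')
io_count_all=$((ch1 + ch2))
io_count2=$($rpc bdev_get_iostat -b Malloc_STAT | jq -r '.bdevs[0].num_read_ops')
# In the run below this works out to 499200 sitting between 483331 and 533251.
[ "$io_count_all" -ge "$io_count1" ] && [ "$io_count_all" -le "$io_count2" ] \
    && echo "per-channel counters consistent"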
00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:13:25.704 "tick_rate": 2200000000, 00:13:25.704 "ticks": 1706990148109, 00:13:25.704 "bdevs": [ 00:13:25.704 { 00:13:25.704 "name": "Malloc_STAT", 00:13:25.704 "bytes_read": 1979748864, 00:13:25.704 "num_read_ops": 483331, 00:13:25.704 "bytes_written": 0, 00:13:25.704 "num_write_ops": 0, 00:13:25.704 "bytes_unmapped": 0, 00:13:25.704 "num_unmap_ops": 0, 00:13:25.704 "bytes_copied": 0, 00:13:25.704 "num_copy_ops": 0, 00:13:25.704 "read_latency_ticks": 2127008637209, 00:13:25.704 "max_read_latency_ticks": 8220058, 00:13:25.704 "min_read_latency_ticks": 266591, 00:13:25.704 "write_latency_ticks": 0, 00:13:25.704 "max_write_latency_ticks": 0, 00:13:25.704 "min_write_latency_ticks": 0, 00:13:25.704 "unmap_latency_ticks": 0, 00:13:25.704 "max_unmap_latency_ticks": 0, 00:13:25.704 "min_unmap_latency_ticks": 0, 00:13:25.704 "copy_latency_ticks": 0, 00:13:25.704 "max_copy_latency_ticks": 0, 00:13:25.704 "min_copy_latency_ticks": 0, 00:13:25.704 "io_error": {} 00:13:25.704 } 00:13:25.704 ] 00:13:25.704 }' 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=483331 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:25.963 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.963 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:13:25.963 "tick_rate": 2200000000, 00:13:25.963 "ticks": 1707126984275, 00:13:25.963 "name": "Malloc_STAT", 00:13:25.963 "channels": [ 00:13:25.963 { 00:13:25.963 "thread_id": 2, 00:13:25.963 "bytes_read": 1004535808, 00:13:25.963 "num_read_ops": 245248, 00:13:25.963 "bytes_written": 0, 00:13:25.963 "num_write_ops": 0, 00:13:25.963 "bytes_unmapped": 0, 00:13:25.963 "num_unmap_ops": 0, 
00:13:25.963 "bytes_copied": 0, 00:13:25.963 "num_copy_ops": 0, 00:13:25.963 "read_latency_ticks": 1098200052635, 00:13:25.963 "max_read_latency_ticks": 8157572, 00:13:25.963 "min_read_latency_ticks": 2396718, 00:13:25.963 "write_latency_ticks": 0, 00:13:25.963 "max_write_latency_ticks": 0, 00:13:25.963 "min_write_latency_ticks": 0, 00:13:25.963 "unmap_latency_ticks": 0, 00:13:25.963 "max_unmap_latency_ticks": 0, 00:13:25.963 "min_unmap_latency_ticks": 0, 00:13:25.963 "copy_latency_ticks": 0, 00:13:25.963 "max_copy_latency_ticks": 0, 00:13:25.963 "min_copy_latency_ticks": 0 00:13:25.963 }, 00:13:25.963 { 00:13:25.963 "thread_id": 3, 00:13:25.963 "bytes_read": 1040187392, 00:13:25.963 "num_read_ops": 253952, 00:13:25.963 "bytes_written": 0, 00:13:25.963 "num_write_ops": 0, 00:13:25.963 "bytes_unmapped": 0, 00:13:25.963 "num_unmap_ops": 0, 00:13:25.963 "bytes_copied": 0, 00:13:25.963 "num_copy_ops": 0, 00:13:25.963 "read_latency_ticks": 1098601432517, 00:13:25.963 "max_read_latency_ticks": 8220058, 00:13:25.963 "min_read_latency_ticks": 3113276, 00:13:25.963 "write_latency_ticks": 0, 00:13:25.963 "max_write_latency_ticks": 0, 00:13:25.963 "min_write_latency_ticks": 0, 00:13:25.963 "unmap_latency_ticks": 0, 00:13:25.963 "max_unmap_latency_ticks": 0, 00:13:25.963 "min_unmap_latency_ticks": 0, 00:13:25.963 "copy_latency_ticks": 0, 00:13:25.963 "max_copy_latency_ticks": 0, 00:13:25.963 "min_copy_latency_ticks": 0 00:13:25.963 } 00:13:25.963 ] 00:13:25.963 }' 00:13:25.963 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:13:25.963 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=245248 00:13:25.963 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=245248 00:13:25.963 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=253952 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=499200 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:13:25.964 "tick_rate": 2200000000, 00:13:25.964 "ticks": 1707415463169, 00:13:25.964 "bdevs": [ 00:13:25.964 { 00:13:25.964 "name": "Malloc_STAT", 00:13:25.964 "bytes_read": 2184221184, 00:13:25.964 "num_read_ops": 533251, 00:13:25.964 "bytes_written": 0, 00:13:25.964 "num_write_ops": 0, 00:13:25.964 "bytes_unmapped": 0, 00:13:25.964 "num_unmap_ops": 0, 00:13:25.964 "bytes_copied": 0, 00:13:25.964 "num_copy_ops": 0, 00:13:25.964 "read_latency_ticks": 2345178854498, 00:13:25.964 "max_read_latency_ticks": 8220058, 00:13:25.964 "min_read_latency_ticks": 266591, 00:13:25.964 "write_latency_ticks": 0, 00:13:25.964 "max_write_latency_ticks": 0, 00:13:25.964 "min_write_latency_ticks": 0, 00:13:25.964 "unmap_latency_ticks": 0, 00:13:25.964 "max_unmap_latency_ticks": 0, 00:13:25.964 "min_unmap_latency_ticks": 0, 00:13:25.964 "copy_latency_ticks": 0, 00:13:25.964 "max_copy_latency_ticks": 0, 00:13:25.964 
"min_copy_latency_ticks": 0, 00:13:25.964 "io_error": {} 00:13:25.964 } 00:13:25.964 ] 00:13:25.964 }' 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=533251 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 499200 -lt 483331 ']' 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 499200 -gt 533251 ']' 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.964 14:06:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:25.964 00:13:25.964 Latency(us) 00:13:25.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.964 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:25.964 Malloc_STAT : 2.15 125986.88 492.14 0.00 0.00 2028.69 444.97 3708.74 00:13:25.964 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:25.964 Malloc_STAT : 2.15 130000.15 507.81 0.00 0.00 1966.28 316.51 3738.53 00:13:25.964 =================================================================================================================== 00:13:25.964 Total : 255987.03 999.95 0.00 0.00 1996.99 316.51 3738.53 00:13:26.222 0 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 185822 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 185822 ']' 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 185822 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 185822 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 185822' 00:13:26.222 killing process with pid 185822 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 185822 00:13:26.222 Received shutdown signal, test time was about 2.297824 seconds 00:13:26.222 00:13:26.222 Latency(us) 00:13:26.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.222 =================================================================================================================== 00:13:26.222 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.222 14:06:12 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 185822 00:13:27.604 14:06:13 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:13:27.604 00:13:27.604 real 0m4.979s 00:13:27.604 user 0m9.414s 00:13:27.604 sys 0m0.419s 00:13:27.604 14:06:13 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.604 14:06:13 
blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:27.604 ************************************ 00:13:27.604 END TEST bdev_stat 00:13:27.604 ************************************ 00:13:27.604 14:06:13 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:13:27.604 14:06:13 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:13:27.604 00:13:27.604 real 2m27.388s 00:13:27.604 user 5m51.546s 00:13:27.604 sys 0m22.988s 00:13:27.604 14:06:13 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:27.604 14:06:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:27.604 ************************************ 00:13:27.604 END TEST blockdev_general 00:13:27.604 ************************************ 00:13:27.604 14:06:13 -- common/autotest_common.sh@1142 -- # return 0 00:13:27.604 14:06:13 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:27.604 14:06:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:27.604 14:06:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.604 14:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:27.604 ************************************ 00:13:27.604 START TEST bdev_raid 00:13:27.604 ************************************ 00:13:27.604 14:06:13 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:27.604 * Looking for test storage... 
00:13:27.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:27.604 14:06:13 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:27.604 14:06:13 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:27.604 14:06:13 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:27.604 14:06:13 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:13:27.604 14:06:13 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:13:27.863 14:06:13 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:13:27.863 14:06:13 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:13:27.863 14:06:13 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:13:27.863 14:06:13 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:13:27.863 14:06:13 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:13:27.863 14:06:13 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:13:27.863 14:06:13 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:27.863 14:06:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:27.863 14:06:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.863 14:06:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:27.863 ************************************ 00:13:27.863 START TEST raid_function_test_raid0 00:13:27.863 ************************************ 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1123 -- # raid_function_test raid0 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=185976 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 185976' 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:27.863 Process raid pid: 185976 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 185976 /var/tmp/spdk-raid.sock 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@829 -- # '[' -z 185976 ']' 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:27.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
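raid_function_test_raid0 builds a two-member raid0 bdev inside bdev_svc, exports it over NBD, and then exercises it with dd, cmp and blkdiscard. A hedged outline of the configuration half, talking to the spdk-raid.sock RPC socket set up above; the base bdev sizes and strip size are assumptions (the resulting raid reports 131072 blocks of 512 B further down):
rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
$rpc bdev_malloc_create -b Base_1 32 512                         # base bdev size assumed
$rpc bdev_malloc_create -b Base_2 32 512
$rpc bdev_raid_create -n raid -z 64 -r raid0 -b 'Base_1 Base_2'  # strip size in KiB assumed
$rpc bdev_raid_get_bdevs online                                  # should report the "raid" bdev, as checked below
$rpc nbd_start_disk raid /dev/nbd0                               # exposed at /dev/nbd0 for the data/discard checks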
00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.863 14:06:13 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:27.863 [2024-07-15 14:06:13.670117] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:13:27.863 [2024-07-15 14:06:13.670427] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.863 [2024-07-15 14:06:13.825926] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.120 [2024-07-15 14:06:14.040946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.378 [2024-07-15 14:06:14.245402] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:28.943 14:06:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.943 14:06:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # return 0 00:13:28.943 14:06:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:13:28.943 14:06:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:13:28.943 14:06:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:28.943 14:06:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:13:28.943 14:06:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:29.201 [2024-07-15 14:06:15.050833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:29.201 [2024-07-15 14:06:15.052527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:29.201 [2024-07-15 14:06:15.052751] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:29.201 [2024-07-15 14:06:15.052916] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:29.201 [2024-07-15 14:06:15.053152] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:13:29.201 [2024-07-15 14:06:15.053518] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:29.201 [2024-07-15 14:06:15.053656] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:13:29.201 [2024-07-15 14:06:15.053927] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.201 Base_1 00:13:29.201 Base_2 00:13:29.201 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:29.201 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:29.201 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.459 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:29.717 [2024-07-15 14:06:15.570982] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:29.717 /dev/nbd0 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # local i 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # break 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.717 1+0 records in 00:13:29.717 1+0 records out 00:13:29.717 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541077 s, 7.6 MB/s 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # size=4096 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # return 0 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:29.717 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:29.976 { 00:13:29.976 "nbd_device": "/dev/nbd0", 00:13:29.976 "bdev_name": "raid" 00:13:29.976 } 00:13:29.976 ]' 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:29.976 { 00:13:29.976 "nbd_device": "/dev/nbd0", 00:13:29.976 "bdev_name": "raid" 00:13:29.976 } 00:13:29.976 ]' 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:29.976 4096+0 records in 00:13:29.976 4096+0 records out 00:13:29.976 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0157375 s, 133 MB/s 00:13:29.976 14:06:15 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:30.235 4096+0 records in 00:13:30.235 4096+0 records out 00:13:30.235 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.251073 s, 8.4 MB/s 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:30.235 128+0 records in 00:13:30.235 128+0 records out 00:13:30.235 65536 bytes (66 kB, 64 KiB) copied, 0.000829354 s, 79.0 MB/s 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:30.235 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:30.494 2035+0 records in 00:13:30.494 2035+0 records out 00:13:30.494 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00686021 s, 152 MB/s 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:30.494 456+0 records in 00:13:30.494 
456+0 records out 00:13:30.494 233472 bytes (233 kB, 228 KiB) copied, 0.00105461 s, 221 MB/s 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.494 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:30.495 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.495 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.753 [2024-07-15 14:06:16.577902] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:30.753 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:31.011 14:06:16 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 185976 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@948 -- # '[' -z 185976 ']' 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # kill -0 185976 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # uname 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 185976 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 185976' 00:13:31.011 killing process with pid 185976 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@967 -- # kill 185976 00:13:31.011 [2024-07-15 14:06:16.976661] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.011 14:06:16 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # wait 185976 00:13:31.011 [2024-07-15 14:06:16.976923] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.011 [2024-07-15 14:06:16.977005] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.011 [2024-07-15 14:06:16.977106] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:13:31.270 [2024-07-15 14:06:17.142941] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:32.661 14:06:18 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:13:32.661 00:13:32.661 real 0m4.640s 00:13:32.661 user 0m5.985s 00:13:32.661 sys 0m1.005s 00:13:32.661 14:06:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:32.661 14:06:18 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:32.661 ************************************ 00:13:32.661 END TEST raid_function_test_raid0 00:13:32.661 ************************************ 00:13:32.661 14:06:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:32.661 14:06:18 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:13:32.661 14:06:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:32.661 14:06:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.661 14:06:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:32.661 ************************************ 00:13:32.661 
START TEST raid_function_test_concat 00:13:32.661 ************************************ 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1123 -- # raid_function_test concat 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=186125 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 186125' 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:32.661 Process raid pid: 186125 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 186125 /var/tmp/spdk-raid.sock 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@829 -- # '[' -z 186125 ']' 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:32.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.661 14:06:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:32.661 [2024-07-15 14:06:18.372052] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:13:32.661 [2024-07-15 14:06:18.372445] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.661 [2024-07-15 14:06:18.537183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.919 [2024-07-15 14:06:18.773610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.177 [2024-07-15 14:06:18.973840] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.434 14:06:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.434 14:06:19 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # return 0 00:13:33.434 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:13:33.434 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:13:33.434 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:33.434 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:13:33.434 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:34.000 [2024-07-15 14:06:19.698036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:34.000 [2024-07-15 14:06:19.699704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:34.000 [2024-07-15 14:06:19.699952] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:34.000 [2024-07-15 14:06:19.700076] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:34.000 [2024-07-15 14:06:19.700216] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:13:34.000 [2024-07-15 14:06:19.700532] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:34.000 [2024-07-15 14:06:19.700655] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:13:34.000 [2024-07-15 14:06:19.700900] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.000 Base_1 00:13:34.000 Base_2 00:13:34.000 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:34.000 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:34.000 14:06:19 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:34.258 
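The nbd_start_disks trace above exports the freshly configured "raid" bdev as /dev/nbd0 and then polls until the kernel actually publishes the device. Condensed into plain shell, with the rpc.py path and socket used throughout this run (scratch file path shortened; the harness's waitfornbd helper retries the check a configurable number of times), the sequence is roughly:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Map the raid bdev onto an NBD device node.
    $RPC nbd_start_disk raid /dev/nbd0

    # Wait for the kernel to list nbd0, then prove it is readable with one direct-I/O block.
    for _ in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct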
14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.258 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:34.258 [2024-07-15 14:06:20.234146] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:13:34.258 /dev/nbd0 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # local i 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # break 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.531 1+0 records in 00:13:34.531 1+0 records out 00:13:34.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276908 s, 14.8 MB/s 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # size=4096 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # return 0 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:34.531 { 00:13:34.531 "nbd_device": "/dev/nbd0", 00:13:34.531 "bdev_name": "raid" 00:13:34.531 } 00:13:34.531 ]' 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:34.531 { 00:13:34.531 "nbd_device": "/dev/nbd0", 00:13:34.531 "bdev_name": "raid" 00:13:34.531 } 00:13:34.531 ]' 00:13:34.531 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:34.789 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:34.789 4096+0 records in 00:13:34.789 4096+0 records out 00:13:34.789 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.019403 s, 108 MB/s 00:13:34.789 14:06:20 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:35.047 4096+0 records in 00:13:35.047 4096+0 records out 00:13:35.047 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.238347 s, 8.8 MB/s 00:13:35.047 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:35.047 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.047 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:35.047 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:35.047 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:35.048 128+0 records in 00:13:35.048 128+0 records out 00:13:35.048 65536 bytes (66 kB, 64 KiB) copied, 0.00082455 s, 79.5 MB/s 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:35.048 2035+0 records in 00:13:35.048 2035+0 records out 00:13:35.048 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00623353 s, 167 MB/s 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:35.048 456+0 records in 00:13:35.048 456+0 records out 00:13:35.048 233472 bytes (233 kB, 228 KiB) copied, 0.0025813 s, 90.4 MB/s 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
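What the raid_unmap_data_verify trace boils down to: fill the exported device with known random data, then for each (offset, length) pair zero that range in the reference file, discard the matching byte range on /dev/nbd0, flush, and re-compare the whole device, relying on discarded regions reading back as zeroes. A condensed sketch of that loop, using the block size, offsets and lengths visible in the log:

    blksize=512
    rw_blk_num=4096
    rw_len=$(( blksize * rw_blk_num ))      # 2097152 bytes
    ref=/raidtest/raidrandtest
    nbd=/dev/nbd0

    # Seed the reference file and the raid device with identical random data.
    dd if=/dev/urandom of="$ref" bs=$blksize count=$rw_blk_num
    dd if="$ref" of="$nbd" bs=$blksize count=$rw_blk_num oflag=direct
    blockdev --flushbufs "$nbd"
    cmp -b -n $rw_len "$ref" "$nbd"

    unmap_blk_offs=(0 1028 321)
    unmap_blk_nums=(128 2035 456)

    for i in "${!unmap_blk_offs[@]}"; do
        off=$(( unmap_blk_offs[i] * blksize ))
        len=$(( unmap_blk_nums[i] * blksize ))
        # Zero the range in the reference file, discard it on the device,
        # then verify both still match byte for byte.
        dd if=/dev/zero of="$ref" bs=$blksize seek=${unmap_blk_offs[i]} \
           count=${unmap_blk_nums[i]} conv=notrunc
        blkdiscard -o $off -l $len "$nbd"
        blockdev --flushbufs "$nbd"
        cmp -b -n $rw_len "$ref" "$nbd"
    done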
00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.048 14:06:20 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:35.306 [2024-07-15 14:06:21.237512] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:35.306 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:35.617 14:06:21 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 186125 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@948 -- # '[' -z 186125 ']' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # kill -0 186125 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # uname 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 186125 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 186125' 00:13:35.617 killing process with pid 186125 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@967 -- # kill 186125 00:13:35.617 [2024-07-15 14:06:21.587228] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:35.617 14:06:21 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # wait 186125 00:13:35.617 [2024-07-15 14:06:21.587453] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:35.617 [2024-07-15 14:06:21.587505] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:35.617 [2024-07-15 14:06:21.587516] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:13:35.874 [2024-07-15 14:06:21.752681] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.249 14:06:22 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:13:37.249 00:13:37.249 real 0m4.545s 00:13:37.249 user 0m5.808s 00:13:37.249 sys 0m1.004s 00:13:37.249 14:06:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.249 14:06:22 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:37.249 ************************************ 00:13:37.249 END TEST raid_function_test_concat 00:13:37.249 ************************************ 00:13:37.249 14:06:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:37.249 14:06:22 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:13:37.249 14:06:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:13:37.249 14:06:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.249 14:06:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.249 ************************************ 00:13:37.249 START TEST raid0_resize_test 00:13:37.249 ************************************ 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 
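The teardown that closes each function test (visible just above, before raid0_resize_test begins) mirrors the setup in reverse: stop the NBD export, wait for the node to disappear, confirm no exported disks remain, and kill the bdev_svc app. A rough sketch, assuming raid_pid holds the PID captured when the app was launched (186125 in this run):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Detach the NBD export and wait until the kernel no longer lists nbd0.
    $RPC nbd_stop_disk /dev/nbd0
    for _ in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions || break
        sleep 0.1
    done

    # No exported disks should remain before the app is shut down.
    count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]

    kill "$raid_pid" && wait "$raid_pid"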
00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=186267 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:37.249 Process raid pid: 186267 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 186267' 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 186267 /var/tmp/spdk-raid.sock 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 186267 ']' 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:37.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.249 14:06:22 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:37.249 [2024-07-15 14:06:22.966883] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:13:37.249 [2024-07-15 14:06:22.967190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.249 [2024-07-15 14:06:23.117439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.507 [2024-07-15 14:06:23.369702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.764 [2024-07-15 14:06:23.575931] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:38.021 14:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.021 14:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:13:38.021 14:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:38.279 Base_1 00:13:38.279 14:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:38.537 Base_2 00:13:38.537 14:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:38.795 [2024-07-15 14:06:24.682153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:38.795 [2024-07-15 14:06:24.683958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:38.795 [2024-07-15 14:06:24.684128] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:38.795 [2024-07-15 14:06:24.684263] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:38.795 [2024-07-15 14:06:24.684446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:13:38.795 [2024-07-15 14:06:24.684712] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:38.795 [2024-07-15 14:06:24.684893] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:13:38.795 [2024-07-15 14:06:24.685163] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.795 14:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:39.053 [2024-07-15 14:06:24.914172] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:39.053 [2024-07-15 14:06:24.914395] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:39.053 true 00:13:39.053 14:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:39.053 14:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:13:39.311 [2024-07-15 14:06:25.222349] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.311 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:13:39.311 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:13:39.311 14:06:25 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:13:39.311 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:39.582 [2024-07-15 14:06:25.466278] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:39.582 [2024-07-15 14:06:25.466464] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:39.582 [2024-07-15 14:06:25.466820] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:39.582 true 00:13:39.582 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:39.582 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:13:39.872 [2024-07-15 14:06:25.702374] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 186267 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 186267 ']' 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 186267 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 186267 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 186267' 00:13:39.872 killing process with pid 186267 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 186267 00:13:39.872 [2024-07-15 14:06:25.750873] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.872 14:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 186267 00:13:39.872 [2024-07-15 14:06:25.751133] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.872 [2024-07-15 14:06:25.751268] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.872 [2024-07-15 14:06:25.751366] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:13:39.872 [2024-07-15 14:06:25.751866] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:41.269 14:06:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:13:41.269 00:13:41.269 real 0m3.947s 00:13:41.269 user 0m5.651s 00:13:41.269 sys 0m0.524s 00:13:41.269 14:06:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 
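The resize test above reduces to a handful of RPCs: build a raid0 over two 32 MiB null bdevs, grow the members one at a time, and check that the raid's block count only doubles once both members have been resized. A sketch using the same RPCs, sizes and expected counts shown in the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Two 32 MiB null bdevs with 512-byte blocks, striped into one raid0 volume.
    $RPC bdev_null_create Base_1 32 512
    $RPC bdev_null_create Base_2 32 512
    $RPC bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

    # Growing only one member must not change the raid size (still 131072 blocks = 64 MiB).
    $RPC bdev_null_resize Base_1 64
    $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'      # expect 131072

    # Once the second member grows as well, the raid doubles to 262144 blocks (128 MiB).
    $RPC bdev_null_resize Base_2 64
    $RPC bdev_get_bdevs -b Raid | jq '.[].num_blocks'      # expect 262144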
00:13:41.269 14:06:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.269 ************************************ 00:13:41.269 END TEST raid0_resize_test 00:13:41.269 ************************************ 00:13:41.269 14:06:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:41.269 14:06:26 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:13:41.269 14:06:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:41.269 14:06:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:41.269 14:06:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:41.269 14:06:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.269 14:06:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:41.269 ************************************ 00:13:41.269 START TEST raid_state_function_test 00:13:41.269 ************************************ 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 
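Like the function tests before it, raid_state_function_test launches its own bdev_svc application and blocks until the RPC socket answers before creating any raid. A minimal sketch of that launch-and-wait step, assuming the repository path and socket name used throughout this run (the harness's waitforlisten helper additionally keeps checking that the PID is still alive):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk-raid.sock

    # Host the raid bdevs in a dedicated bdev_svc app with raid debug logging enabled.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$RPC_SOCK" -i 0 -L bdev_raid &
    raid_pid=$!

    # Wait for the UNIX domain RPC socket to appear before issuing any commands.
    for _ in $(seq 1 100); do
        [ -S "$RPC_SOCK" ] && break
        sleep 0.1
    done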
00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=186361 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 186361' 00:13:41.269 Process raid pid: 186361 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 186361 /var/tmp/spdk-raid.sock 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 186361 ']' 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:41.269 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:41.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:41.270 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:41.270 14:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.270 [2024-07-15 14:06:26.985865] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:13:41.270 [2024-07-15 14:06:26.986341] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.270 [2024-07-15 14:06:27.145618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.528 [2024-07-15 14:06:27.403669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.787 [2024-07-15 14:06:27.610146] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:42.046 14:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:42.046 14:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:13:42.046 14:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:42.305 [2024-07-15 14:06:28.230444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.305 [2024-07-15 14:06:28.231355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.305 [2024-07-15 14:06:28.231522] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:42.305 [2024-07-15 14:06:28.231797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:42.305 
14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.305 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.564 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.564 "name": "Existed_Raid", 00:13:42.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.564 "strip_size_kb": 64, 00:13:42.564 "state": "configuring", 00:13:42.564 "raid_level": "raid0", 00:13:42.564 "superblock": false, 00:13:42.564 "num_base_bdevs": 2, 00:13:42.564 "num_base_bdevs_discovered": 0, 00:13:42.564 "num_base_bdevs_operational": 2, 00:13:42.564 "base_bdevs_list": [ 00:13:42.564 { 00:13:42.564 "name": "BaseBdev1", 00:13:42.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.564 "is_configured": false, 00:13:42.564 "data_offset": 0, 00:13:42.564 "data_size": 0 00:13:42.564 }, 00:13:42.564 { 00:13:42.564 "name": "BaseBdev2", 00:13:42.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.564 "is_configured": false, 00:13:42.564 "data_offset": 0, 00:13:42.564 "data_size": 0 00:13:42.564 } 00:13:42.564 ] 00:13:42.564 }' 00:13:42.564 14:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.564 14:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.130 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:43.389 [2024-07-15 14:06:29.386437] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:43.389 [2024-07-15 14:06:29.386704] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:43.648 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:43.649 [2024-07-15 14:06:29.618501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:43.649 [2024-07-15 14:06:29.619156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:43.649 [2024-07-15 14:06:29.619306] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:43.649 [2024-07-15 14:06:29.619440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev2 doesn't exist now 00:13:43.649 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:43.908 [2024-07-15 14:06:29.886957] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.908 BaseBdev1 00:13:43.908 14:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:43.908 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:43.908 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:43.908 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:43.908 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:43.908 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:43.908 14:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:44.167 14:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:44.426 [ 00:13:44.426 { 00:13:44.426 "name": "BaseBdev1", 00:13:44.426 "aliases": [ 00:13:44.426 "1681cc02-71dd-4de6-9072-09ceceb32d69" 00:13:44.426 ], 00:13:44.426 "product_name": "Malloc disk", 00:13:44.426 "block_size": 512, 00:13:44.426 "num_blocks": 65536, 00:13:44.426 "uuid": "1681cc02-71dd-4de6-9072-09ceceb32d69", 00:13:44.426 "assigned_rate_limits": { 00:13:44.426 "rw_ios_per_sec": 0, 00:13:44.426 "rw_mbytes_per_sec": 0, 00:13:44.426 "r_mbytes_per_sec": 0, 00:13:44.426 "w_mbytes_per_sec": 0 00:13:44.426 }, 00:13:44.426 "claimed": true, 00:13:44.426 "claim_type": "exclusive_write", 00:13:44.426 "zoned": false, 00:13:44.426 "supported_io_types": { 00:13:44.426 "read": true, 00:13:44.426 "write": true, 00:13:44.426 "unmap": true, 00:13:44.426 "flush": true, 00:13:44.426 "reset": true, 00:13:44.426 "nvme_admin": false, 00:13:44.426 "nvme_io": false, 00:13:44.426 "nvme_io_md": false, 00:13:44.426 "write_zeroes": true, 00:13:44.426 "zcopy": true, 00:13:44.426 "get_zone_info": false, 00:13:44.426 "zone_management": false, 00:13:44.426 "zone_append": false, 00:13:44.426 "compare": false, 00:13:44.426 "compare_and_write": false, 00:13:44.426 "abort": true, 00:13:44.426 "seek_hole": false, 00:13:44.426 "seek_data": false, 00:13:44.426 "copy": true, 00:13:44.426 "nvme_iov_md": false 00:13:44.426 }, 00:13:44.426 "memory_domains": [ 00:13:44.426 { 00:13:44.426 "dma_device_id": "system", 00:13:44.426 "dma_device_type": 1 00:13:44.426 }, 00:13:44.426 { 00:13:44.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.426 "dma_device_type": 2 00:13:44.426 } 00:13:44.426 ], 00:13:44.426 "driver_specific": {} 00:13:44.426 } 00:13:44.426 ] 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.426 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:44.685 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:44.685 "name": "Existed_Raid", 00:13:44.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.685 "strip_size_kb": 64, 00:13:44.685 "state": "configuring", 00:13:44.685 "raid_level": "raid0", 00:13:44.685 "superblock": false, 00:13:44.685 "num_base_bdevs": 2, 00:13:44.685 "num_base_bdevs_discovered": 1, 00:13:44.685 "num_base_bdevs_operational": 2, 00:13:44.685 "base_bdevs_list": [ 00:13:44.685 { 00:13:44.685 "name": "BaseBdev1", 00:13:44.685 "uuid": "1681cc02-71dd-4de6-9072-09ceceb32d69", 00:13:44.685 "is_configured": true, 00:13:44.685 "data_offset": 0, 00:13:44.685 "data_size": 65536 00:13:44.685 }, 00:13:44.685 { 00:13:44.685 "name": "BaseBdev2", 00:13:44.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.685 "is_configured": false, 00:13:44.685 "data_offset": 0, 00:13:44.685 "data_size": 0 00:13:44.685 } 00:13:44.685 ] 00:13:44.685 }' 00:13:44.685 14:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:44.685 14:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.620 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:45.620 [2024-07-15 14:06:31.519280] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:45.620 [2024-07-15 14:06:31.519549] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:13:45.620 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:45.879 [2024-07-15 14:06:31.751380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.879 [2024-07-15 14:06:31.753351] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:45.879 [2024-07-15 14:06:31.753948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:45.879 14:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.138 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:46.138 "name": "Existed_Raid", 00:13:46.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.138 "strip_size_kb": 64, 00:13:46.138 "state": "configuring", 00:13:46.138 "raid_level": "raid0", 00:13:46.138 "superblock": false, 00:13:46.138 "num_base_bdevs": 2, 00:13:46.138 "num_base_bdevs_discovered": 1, 00:13:46.138 "num_base_bdevs_operational": 2, 00:13:46.138 "base_bdevs_list": [ 00:13:46.138 { 00:13:46.138 "name": "BaseBdev1", 00:13:46.138 "uuid": "1681cc02-71dd-4de6-9072-09ceceb32d69", 00:13:46.138 "is_configured": true, 00:13:46.138 "data_offset": 0, 00:13:46.138 "data_size": 65536 00:13:46.138 }, 00:13:46.138 { 00:13:46.138 "name": "BaseBdev2", 00:13:46.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.138 "is_configured": false, 00:13:46.138 "data_offset": 0, 00:13:46.138 "data_size": 0 00:13:46.138 } 00:13:46.138 ] 00:13:46.138 }' 00:13:46.138 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:46.138 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.703 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:46.963 [2024-07-15 14:06:32.927412] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:46.964 [2024-07-15 14:06:32.927656] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:46.964 [2024-07-15 14:06:32.927707] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:46.964 [2024-07-15 14:06:32.927923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:13:46.964 [2024-07-15 14:06:32.928311] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:46.964 
[2024-07-15 14:06:32.928440] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:13:46.964 [2024-07-15 14:06:32.928796] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:46.964 BaseBdev2 00:13:46.964 14:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:46.964 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:46.964 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:46.964 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:46.964 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:46.964 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:46.964 14:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:47.222 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:47.480 [ 00:13:47.480 { 00:13:47.480 "name": "BaseBdev2", 00:13:47.480 "aliases": [ 00:13:47.480 "5eea16c2-dbef-4ef2-9ec0-73ccbd6c2b07" 00:13:47.480 ], 00:13:47.480 "product_name": "Malloc disk", 00:13:47.480 "block_size": 512, 00:13:47.480 "num_blocks": 65536, 00:13:47.480 "uuid": "5eea16c2-dbef-4ef2-9ec0-73ccbd6c2b07", 00:13:47.480 "assigned_rate_limits": { 00:13:47.480 "rw_ios_per_sec": 0, 00:13:47.480 "rw_mbytes_per_sec": 0, 00:13:47.480 "r_mbytes_per_sec": 0, 00:13:47.480 "w_mbytes_per_sec": 0 00:13:47.480 }, 00:13:47.480 "claimed": true, 00:13:47.480 "claim_type": "exclusive_write", 00:13:47.480 "zoned": false, 00:13:47.480 "supported_io_types": { 00:13:47.480 "read": true, 00:13:47.480 "write": true, 00:13:47.480 "unmap": true, 00:13:47.480 "flush": true, 00:13:47.480 "reset": true, 00:13:47.480 "nvme_admin": false, 00:13:47.480 "nvme_io": false, 00:13:47.480 "nvme_io_md": false, 00:13:47.480 "write_zeroes": true, 00:13:47.480 "zcopy": true, 00:13:47.480 "get_zone_info": false, 00:13:47.480 "zone_management": false, 00:13:47.480 "zone_append": false, 00:13:47.480 "compare": false, 00:13:47.480 "compare_and_write": false, 00:13:47.480 "abort": true, 00:13:47.480 "seek_hole": false, 00:13:47.480 "seek_data": false, 00:13:47.480 "copy": true, 00:13:47.480 "nvme_iov_md": false 00:13:47.480 }, 00:13:47.480 "memory_domains": [ 00:13:47.480 { 00:13:47.480 "dma_device_id": "system", 00:13:47.480 "dma_device_type": 1 00:13:47.480 }, 00:13:47.480 { 00:13:47.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:47.480 "dma_device_type": 2 00:13:47.480 } 00:13:47.480 ], 00:13:47.480 "driver_specific": {} 00:13:47.480 } 00:13:47.480 ] 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.480 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.481 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:47.738 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.738 "name": "Existed_Raid", 00:13:47.738 "uuid": "bc7bb41c-da10-4a98-9052-6f5b320a69f7", 00:13:47.738 "strip_size_kb": 64, 00:13:47.738 "state": "online", 00:13:47.738 "raid_level": "raid0", 00:13:47.738 "superblock": false, 00:13:47.738 "num_base_bdevs": 2, 00:13:47.738 "num_base_bdevs_discovered": 2, 00:13:47.738 "num_base_bdevs_operational": 2, 00:13:47.738 "base_bdevs_list": [ 00:13:47.738 { 00:13:47.738 "name": "BaseBdev1", 00:13:47.738 "uuid": "1681cc02-71dd-4de6-9072-09ceceb32d69", 00:13:47.738 "is_configured": true, 00:13:47.738 "data_offset": 0, 00:13:47.738 "data_size": 65536 00:13:47.738 }, 00:13:47.738 { 00:13:47.738 "name": "BaseBdev2", 00:13:47.738 "uuid": "5eea16c2-dbef-4ef2-9ec0-73ccbd6c2b07", 00:13:47.738 "is_configured": true, 00:13:47.738 "data_offset": 0, 00:13:47.738 "data_size": 65536 00:13:47.738 } 00:13:47.738 ] 00:13:47.738 }' 00:13:47.738 14:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.738 14:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:48.302 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:48.560 [2024-07-15 14:06:34.531986] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.560 14:06:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:48.560 "name": "Existed_Raid", 00:13:48.560 "aliases": [ 00:13:48.560 "bc7bb41c-da10-4a98-9052-6f5b320a69f7" 00:13:48.560 ], 00:13:48.560 "product_name": "Raid Volume", 00:13:48.560 "block_size": 512, 00:13:48.560 "num_blocks": 131072, 00:13:48.560 "uuid": "bc7bb41c-da10-4a98-9052-6f5b320a69f7", 00:13:48.560 "assigned_rate_limits": { 00:13:48.560 "rw_ios_per_sec": 0, 00:13:48.560 "rw_mbytes_per_sec": 0, 00:13:48.560 "r_mbytes_per_sec": 0, 00:13:48.560 "w_mbytes_per_sec": 0 00:13:48.560 }, 00:13:48.560 "claimed": false, 00:13:48.560 "zoned": false, 00:13:48.560 "supported_io_types": { 00:13:48.560 "read": true, 00:13:48.560 "write": true, 00:13:48.560 "unmap": true, 00:13:48.560 "flush": true, 00:13:48.560 "reset": true, 00:13:48.560 "nvme_admin": false, 00:13:48.560 "nvme_io": false, 00:13:48.560 "nvme_io_md": false, 00:13:48.560 "write_zeroes": true, 00:13:48.560 "zcopy": false, 00:13:48.560 "get_zone_info": false, 00:13:48.560 "zone_management": false, 00:13:48.560 "zone_append": false, 00:13:48.560 "compare": false, 00:13:48.560 "compare_and_write": false, 00:13:48.560 "abort": false, 00:13:48.560 "seek_hole": false, 00:13:48.560 "seek_data": false, 00:13:48.560 "copy": false, 00:13:48.560 "nvme_iov_md": false 00:13:48.560 }, 00:13:48.560 "memory_domains": [ 00:13:48.560 { 00:13:48.560 "dma_device_id": "system", 00:13:48.560 "dma_device_type": 1 00:13:48.560 }, 00:13:48.560 { 00:13:48.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.560 "dma_device_type": 2 00:13:48.560 }, 00:13:48.560 { 00:13:48.560 "dma_device_id": "system", 00:13:48.560 "dma_device_type": 1 00:13:48.560 }, 00:13:48.560 { 00:13:48.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.561 "dma_device_type": 2 00:13:48.561 } 00:13:48.561 ], 00:13:48.561 "driver_specific": { 00:13:48.561 "raid": { 00:13:48.561 "uuid": "bc7bb41c-da10-4a98-9052-6f5b320a69f7", 00:13:48.561 "strip_size_kb": 64, 00:13:48.561 "state": "online", 00:13:48.561 "raid_level": "raid0", 00:13:48.561 "superblock": false, 00:13:48.561 "num_base_bdevs": 2, 00:13:48.561 "num_base_bdevs_discovered": 2, 00:13:48.561 "num_base_bdevs_operational": 2, 00:13:48.561 "base_bdevs_list": [ 00:13:48.561 { 00:13:48.561 "name": "BaseBdev1", 00:13:48.561 "uuid": "1681cc02-71dd-4de6-9072-09ceceb32d69", 00:13:48.561 "is_configured": true, 00:13:48.561 "data_offset": 0, 00:13:48.561 "data_size": 65536 00:13:48.561 }, 00:13:48.561 { 00:13:48.561 "name": "BaseBdev2", 00:13:48.561 "uuid": "5eea16c2-dbef-4ef2-9ec0-73ccbd6c2b07", 00:13:48.561 "is_configured": true, 00:13:48.561 "data_offset": 0, 00:13:48.561 "data_size": 65536 00:13:48.561 } 00:13:48.561 ] 00:13:48.561 } 00:13:48.561 } 00:13:48.561 }' 00:13:48.561 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.818 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:48.818 BaseBdev2' 00:13:48.818 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:48.818 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:48.818 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:49.077 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.077 
"name": "BaseBdev1", 00:13:49.077 "aliases": [ 00:13:49.077 "1681cc02-71dd-4de6-9072-09ceceb32d69" 00:13:49.077 ], 00:13:49.077 "product_name": "Malloc disk", 00:13:49.077 "block_size": 512, 00:13:49.077 "num_blocks": 65536, 00:13:49.077 "uuid": "1681cc02-71dd-4de6-9072-09ceceb32d69", 00:13:49.077 "assigned_rate_limits": { 00:13:49.077 "rw_ios_per_sec": 0, 00:13:49.077 "rw_mbytes_per_sec": 0, 00:13:49.077 "r_mbytes_per_sec": 0, 00:13:49.077 "w_mbytes_per_sec": 0 00:13:49.077 }, 00:13:49.077 "claimed": true, 00:13:49.077 "claim_type": "exclusive_write", 00:13:49.077 "zoned": false, 00:13:49.077 "supported_io_types": { 00:13:49.077 "read": true, 00:13:49.077 "write": true, 00:13:49.077 "unmap": true, 00:13:49.077 "flush": true, 00:13:49.077 "reset": true, 00:13:49.077 "nvme_admin": false, 00:13:49.077 "nvme_io": false, 00:13:49.077 "nvme_io_md": false, 00:13:49.077 "write_zeroes": true, 00:13:49.077 "zcopy": true, 00:13:49.077 "get_zone_info": false, 00:13:49.077 "zone_management": false, 00:13:49.077 "zone_append": false, 00:13:49.077 "compare": false, 00:13:49.077 "compare_and_write": false, 00:13:49.077 "abort": true, 00:13:49.077 "seek_hole": false, 00:13:49.077 "seek_data": false, 00:13:49.077 "copy": true, 00:13:49.077 "nvme_iov_md": false 00:13:49.077 }, 00:13:49.077 "memory_domains": [ 00:13:49.077 { 00:13:49.077 "dma_device_id": "system", 00:13:49.077 "dma_device_type": 1 00:13:49.077 }, 00:13:49.077 { 00:13:49.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.077 "dma_device_type": 2 00:13:49.078 } 00:13:49.078 ], 00:13:49.078 "driver_specific": {} 00:13:49.078 }' 00:13:49.078 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.078 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.078 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:49.078 14:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.078 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.078 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:49.078 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:49.336 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:49.593 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:49.593 "name": "BaseBdev2", 00:13:49.593 "aliases": [ 00:13:49.593 "5eea16c2-dbef-4ef2-9ec0-73ccbd6c2b07" 00:13:49.593 ], 00:13:49.593 "product_name": "Malloc disk", 00:13:49.593 "block_size": 512, 
00:13:49.593 "num_blocks": 65536, 00:13:49.593 "uuid": "5eea16c2-dbef-4ef2-9ec0-73ccbd6c2b07", 00:13:49.593 "assigned_rate_limits": { 00:13:49.593 "rw_ios_per_sec": 0, 00:13:49.594 "rw_mbytes_per_sec": 0, 00:13:49.594 "r_mbytes_per_sec": 0, 00:13:49.594 "w_mbytes_per_sec": 0 00:13:49.594 }, 00:13:49.594 "claimed": true, 00:13:49.594 "claim_type": "exclusive_write", 00:13:49.594 "zoned": false, 00:13:49.594 "supported_io_types": { 00:13:49.594 "read": true, 00:13:49.594 "write": true, 00:13:49.594 "unmap": true, 00:13:49.594 "flush": true, 00:13:49.594 "reset": true, 00:13:49.594 "nvme_admin": false, 00:13:49.594 "nvme_io": false, 00:13:49.594 "nvme_io_md": false, 00:13:49.594 "write_zeroes": true, 00:13:49.594 "zcopy": true, 00:13:49.594 "get_zone_info": false, 00:13:49.594 "zone_management": false, 00:13:49.594 "zone_append": false, 00:13:49.594 "compare": false, 00:13:49.594 "compare_and_write": false, 00:13:49.594 "abort": true, 00:13:49.594 "seek_hole": false, 00:13:49.594 "seek_data": false, 00:13:49.594 "copy": true, 00:13:49.594 "nvme_iov_md": false 00:13:49.594 }, 00:13:49.594 "memory_domains": [ 00:13:49.594 { 00:13:49.594 "dma_device_id": "system", 00:13:49.594 "dma_device_type": 1 00:13:49.594 }, 00:13:49.594 { 00:13:49.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.594 "dma_device_type": 2 00:13:49.594 } 00:13:49.594 ], 00:13:49.594 "driver_specific": {} 00:13:49.594 }' 00:13:49.594 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.594 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:49.850 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.108 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.108 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.108 14:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:50.365 [2024-07-15 14:06:36.140006] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:50.365 [2024-07-15 14:06:36.140225] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:50.366 [2024-07-15 14:06:36.140410] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:50.366 14:06:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.366 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.684 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.684 "name": "Existed_Raid", 00:13:50.684 "uuid": "bc7bb41c-da10-4a98-9052-6f5b320a69f7", 00:13:50.684 "strip_size_kb": 64, 00:13:50.684 "state": "offline", 00:13:50.684 "raid_level": "raid0", 00:13:50.684 "superblock": false, 00:13:50.684 "num_base_bdevs": 2, 00:13:50.684 "num_base_bdevs_discovered": 1, 00:13:50.684 "num_base_bdevs_operational": 1, 00:13:50.684 "base_bdevs_list": [ 00:13:50.684 { 00:13:50.684 "name": null, 00:13:50.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.684 "is_configured": false, 00:13:50.684 "data_offset": 0, 00:13:50.684 "data_size": 65536 00:13:50.684 }, 00:13:50.684 { 00:13:50.684 "name": "BaseBdev2", 00:13:50.684 "uuid": "5eea16c2-dbef-4ef2-9ec0-73ccbd6c2b07", 00:13:50.684 "is_configured": true, 00:13:50.684 "data_offset": 0, 00:13:50.684 "data_size": 65536 00:13:50.684 } 00:13:50.684 ] 00:13:50.684 }' 00:13:50.684 14:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.684 14:06:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.250 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:51.250 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:51.250 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:51.250 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.508 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:51.508 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:51.508 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:51.765 [2024-07-15 14:06:37.739672] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.765 [2024-07-15 14:06:37.740053] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:13:52.022 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:52.022 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:52.022 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.022 14:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 186361 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 186361 ']' 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 186361 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 186361 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 186361' 00:13:52.280 killing process with pid 186361 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 186361 00:13:52.280 [2024-07-15 14:06:38.163304] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.280 14:06:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 186361 00:13:52.280 [2024-07-15 14:06:38.163589] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.653 14:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:53.653 00:13:53.653 real 0m12.361s 00:13:53.653 user 0m21.621s 00:13:53.653 sys 0m1.430s 00:13:53.653 14:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.653 14:06:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.653 ************************************ 00:13:53.653 END TEST raid_state_function_test 00:13:53.653 ************************************ 00:13:53.653 14:06:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:53.653 14:06:39 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 
true 00:13:53.653 14:06:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:53.653 14:06:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.653 14:06:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.653 ************************************ 00:13:53.653 START TEST raid_state_function_test_sb 00:13:53.653 ************************************ 00:13:53.653 14:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=186747 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 186747' 00:13:53.654 Process raid pid: 186747 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 
0 -L bdev_raid 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 186747 /var/tmp/spdk-raid.sock 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 186747 ']' 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:53.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.654 14:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.654 [2024-07-15 14:06:39.401098] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:13:53.654 [2024-07-15 14:06:39.401786] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.654 [2024-07-15 14:06:39.554837] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.911 [2024-07-15 14:06:39.774687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.169 [2024-07-15 14:06:39.980456] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.426 14:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:54.426 14:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:54.426 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:54.684 [2024-07-15 14:06:40.616301] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:54.684 [2024-07-15 14:06:40.616979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:54.684 [2024-07-15 14:06:40.617124] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:54.684 [2024-07-15 14:06:40.617263] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:54.684 14:06:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.684 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.943 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:54.943 "name": "Existed_Raid", 00:13:54.943 "uuid": "1ba0e88b-179c-4309-955a-b3962c12fd3d", 00:13:54.943 "strip_size_kb": 64, 00:13:54.943 "state": "configuring", 00:13:54.943 "raid_level": "raid0", 00:13:54.943 "superblock": true, 00:13:54.943 "num_base_bdevs": 2, 00:13:54.943 "num_base_bdevs_discovered": 0, 00:13:54.943 "num_base_bdevs_operational": 2, 00:13:54.943 "base_bdevs_list": [ 00:13:54.943 { 00:13:54.943 "name": "BaseBdev1", 00:13:54.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.943 "is_configured": false, 00:13:54.943 "data_offset": 0, 00:13:54.943 "data_size": 0 00:13:54.943 }, 00:13:54.943 { 00:13:54.943 "name": "BaseBdev2", 00:13:54.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.943 "is_configured": false, 00:13:54.943 "data_offset": 0, 00:13:54.943 "data_size": 0 00:13:54.943 } 00:13:54.943 ] 00:13:54.943 }' 00:13:54.943 14:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:54.943 14:06:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.876 14:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:55.876 [2024-07-15 14:06:41.760372] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:55.876 [2024-07-15 14:06:41.760554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:55.876 14:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:56.135 [2024-07-15 14:06:42.000470] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:56.135 [2024-07-15 14:06:42.001193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:56.135 [2024-07-15 14:06:42.001399] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:56.135 [2024-07-15 14:06:42.001539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:56.135 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:56.434 [2024-07-15 14:06:42.268089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.434 BaseBdev1 00:13:56.434 14:06:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:56.434 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:56.434 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:56.434 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:56.434 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:56.434 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:56.434 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:56.692 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:56.951 [ 00:13:56.951 { 00:13:56.951 "name": "BaseBdev1", 00:13:56.951 "aliases": [ 00:13:56.951 "b3c091f2-0d3d-4d77-9366-8262685e99b0" 00:13:56.951 ], 00:13:56.951 "product_name": "Malloc disk", 00:13:56.951 "block_size": 512, 00:13:56.951 "num_blocks": 65536, 00:13:56.951 "uuid": "b3c091f2-0d3d-4d77-9366-8262685e99b0", 00:13:56.951 "assigned_rate_limits": { 00:13:56.951 "rw_ios_per_sec": 0, 00:13:56.951 "rw_mbytes_per_sec": 0, 00:13:56.951 "r_mbytes_per_sec": 0, 00:13:56.951 "w_mbytes_per_sec": 0 00:13:56.951 }, 00:13:56.951 "claimed": true, 00:13:56.951 "claim_type": "exclusive_write", 00:13:56.951 "zoned": false, 00:13:56.951 "supported_io_types": { 00:13:56.951 "read": true, 00:13:56.951 "write": true, 00:13:56.951 "unmap": true, 00:13:56.951 "flush": true, 00:13:56.951 "reset": true, 00:13:56.951 "nvme_admin": false, 00:13:56.951 "nvme_io": false, 00:13:56.951 "nvme_io_md": false, 00:13:56.951 "write_zeroes": true, 00:13:56.951 "zcopy": true, 00:13:56.951 "get_zone_info": false, 00:13:56.951 "zone_management": false, 00:13:56.951 "zone_append": false, 00:13:56.951 "compare": false, 00:13:56.951 "compare_and_write": false, 00:13:56.951 "abort": true, 00:13:56.951 "seek_hole": false, 00:13:56.951 "seek_data": false, 00:13:56.951 "copy": true, 00:13:56.951 "nvme_iov_md": false 00:13:56.951 }, 00:13:56.951 "memory_domains": [ 00:13:56.951 { 00:13:56.951 "dma_device_id": "system", 00:13:56.951 "dma_device_type": 1 00:13:56.951 }, 00:13:56.951 { 00:13:56.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:56.951 "dma_device_type": 2 00:13:56.951 } 00:13:56.951 ], 00:13:56.951 "driver_specific": {} 00:13:56.951 } 00:13:56.951 ] 00:13:56.951 14:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.952 14:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:57.211 14:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.211 "name": "Existed_Raid", 00:13:57.211 "uuid": "d609b1ac-53f8-4136-9b50-eba4daa2090a", 00:13:57.211 "strip_size_kb": 64, 00:13:57.211 "state": "configuring", 00:13:57.211 "raid_level": "raid0", 00:13:57.211 "superblock": true, 00:13:57.211 "num_base_bdevs": 2, 00:13:57.211 "num_base_bdevs_discovered": 1, 00:13:57.211 "num_base_bdevs_operational": 2, 00:13:57.211 "base_bdevs_list": [ 00:13:57.211 { 00:13:57.211 "name": "BaseBdev1", 00:13:57.211 "uuid": "b3c091f2-0d3d-4d77-9366-8262685e99b0", 00:13:57.211 "is_configured": true, 00:13:57.211 "data_offset": 2048, 00:13:57.211 "data_size": 63488 00:13:57.211 }, 00:13:57.211 { 00:13:57.211 "name": "BaseBdev2", 00:13:57.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.211 "is_configured": false, 00:13:57.211 "data_offset": 0, 00:13:57.211 "data_size": 0 00:13:57.211 } 00:13:57.211 ] 00:13:57.211 }' 00:13:57.211 14:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.211 14:06:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.777 14:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:58.035 [2024-07-15 14:06:43.968411] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:58.035 [2024-07-15 14:06:43.968683] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:13:58.035 14:06:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:58.294 [2024-07-15 14:06:44.268490] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.294 [2024-07-15 14:06:44.270258] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.294 [2024-07-15 14:06:44.270862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.294 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.552 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.552 "name": "Existed_Raid", 00:13:58.552 "uuid": "91a17b8f-0285-4327-89ce-70ac11731769", 00:13:58.552 "strip_size_kb": 64, 00:13:58.552 "state": "configuring", 00:13:58.552 "raid_level": "raid0", 00:13:58.552 "superblock": true, 00:13:58.552 "num_base_bdevs": 2, 00:13:58.552 "num_base_bdevs_discovered": 1, 00:13:58.552 "num_base_bdevs_operational": 2, 00:13:58.552 "base_bdevs_list": [ 00:13:58.552 { 00:13:58.552 "name": "BaseBdev1", 00:13:58.552 "uuid": "b3c091f2-0d3d-4d77-9366-8262685e99b0", 00:13:58.552 "is_configured": true, 00:13:58.552 "data_offset": 2048, 00:13:58.552 "data_size": 63488 00:13:58.552 }, 00:13:58.552 { 00:13:58.552 "name": "BaseBdev2", 00:13:58.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.552 "is_configured": false, 00:13:58.552 "data_offset": 0, 00:13:58.552 "data_size": 0 00:13:58.552 } 00:13:58.552 ] 00:13:58.552 }' 00:13:58.552 14:06:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.552 14:06:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:59.487 14:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:59.746 [2024-07-15 14:06:45.496986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.746 [2024-07-15 14:06:45.497277] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:59.746 [2024-07-15 14:06:45.497295] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:59.746 [2024-07-15 14:06:45.497388] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:13:59.746 [2024-07-15 14:06:45.497658] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:59.746 [2024-07-15 14:06:45.497683] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:13:59.746 [2024-07-15 14:06:45.497811] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:59.746 BaseBdev2 00:13:59.746 14:06:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:59.746 14:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:59.746 14:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:59.747 14:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:59.747 14:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:59.747 14:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:59.747 14:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:00.005 14:06:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:00.005 [ 00:14:00.005 { 00:14:00.005 "name": "BaseBdev2", 00:14:00.005 "aliases": [ 00:14:00.005 "6cf3a32d-478c-478f-b120-45b0d8f647ad" 00:14:00.005 ], 00:14:00.005 "product_name": "Malloc disk", 00:14:00.005 "block_size": 512, 00:14:00.005 "num_blocks": 65536, 00:14:00.005 "uuid": "6cf3a32d-478c-478f-b120-45b0d8f647ad", 00:14:00.005 "assigned_rate_limits": { 00:14:00.005 "rw_ios_per_sec": 0, 00:14:00.005 "rw_mbytes_per_sec": 0, 00:14:00.005 "r_mbytes_per_sec": 0, 00:14:00.005 "w_mbytes_per_sec": 0 00:14:00.005 }, 00:14:00.005 "claimed": true, 00:14:00.005 "claim_type": "exclusive_write", 00:14:00.005 "zoned": false, 00:14:00.005 "supported_io_types": { 00:14:00.005 "read": true, 00:14:00.005 "write": true, 00:14:00.005 "unmap": true, 00:14:00.005 "flush": true, 00:14:00.006 "reset": true, 00:14:00.006 "nvme_admin": false, 00:14:00.006 "nvme_io": false, 00:14:00.006 "nvme_io_md": false, 00:14:00.006 "write_zeroes": true, 00:14:00.006 "zcopy": true, 00:14:00.006 "get_zone_info": false, 00:14:00.006 "zone_management": false, 00:14:00.006 "zone_append": false, 00:14:00.006 "compare": false, 00:14:00.006 "compare_and_write": false, 00:14:00.006 "abort": true, 00:14:00.006 "seek_hole": false, 00:14:00.006 "seek_data": false, 00:14:00.006 "copy": true, 00:14:00.006 "nvme_iov_md": false 00:14:00.006 }, 00:14:00.006 "memory_domains": [ 00:14:00.006 { 00:14:00.006 "dma_device_id": "system", 00:14:00.006 "dma_device_type": 1 00:14:00.006 }, 00:14:00.006 { 00:14:00.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.006 "dma_device_type": 2 00:14:00.006 } 00:14:00.006 ], 00:14:00.006 "driver_specific": {} 00:14:00.006 } 00:14:00.006 ] 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.006 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.264 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.264 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.522 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.522 "name": "Existed_Raid", 00:14:00.522 "uuid": "91a17b8f-0285-4327-89ce-70ac11731769", 00:14:00.522 "strip_size_kb": 64, 00:14:00.522 "state": "online", 00:14:00.522 "raid_level": "raid0", 00:14:00.522 "superblock": true, 00:14:00.522 "num_base_bdevs": 2, 00:14:00.522 "num_base_bdevs_discovered": 2, 00:14:00.522 "num_base_bdevs_operational": 2, 00:14:00.522 "base_bdevs_list": [ 00:14:00.522 { 00:14:00.522 "name": "BaseBdev1", 00:14:00.522 "uuid": "b3c091f2-0d3d-4d77-9366-8262685e99b0", 00:14:00.522 "is_configured": true, 00:14:00.522 "data_offset": 2048, 00:14:00.522 "data_size": 63488 00:14:00.522 }, 00:14:00.522 { 00:14:00.522 "name": "BaseBdev2", 00:14:00.522 "uuid": "6cf3a32d-478c-478f-b120-45b0d8f647ad", 00:14:00.522 "is_configured": true, 00:14:00.522 "data_offset": 2048, 00:14:00.522 "data_size": 63488 00:14:00.522 } 00:14:00.522 ] 00:14:00.522 }' 00:14:00.522 14:06:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.522 14:06:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:01.088 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:01.656 [2024-07-15 14:06:47.365546] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:01.656 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:01.656 "name": "Existed_Raid", 00:14:01.656 "aliases": [ 00:14:01.656 
"91a17b8f-0285-4327-89ce-70ac11731769" 00:14:01.656 ], 00:14:01.656 "product_name": "Raid Volume", 00:14:01.656 "block_size": 512, 00:14:01.656 "num_blocks": 126976, 00:14:01.656 "uuid": "91a17b8f-0285-4327-89ce-70ac11731769", 00:14:01.656 "assigned_rate_limits": { 00:14:01.656 "rw_ios_per_sec": 0, 00:14:01.656 "rw_mbytes_per_sec": 0, 00:14:01.656 "r_mbytes_per_sec": 0, 00:14:01.656 "w_mbytes_per_sec": 0 00:14:01.656 }, 00:14:01.656 "claimed": false, 00:14:01.656 "zoned": false, 00:14:01.656 "supported_io_types": { 00:14:01.656 "read": true, 00:14:01.656 "write": true, 00:14:01.656 "unmap": true, 00:14:01.656 "flush": true, 00:14:01.656 "reset": true, 00:14:01.656 "nvme_admin": false, 00:14:01.656 "nvme_io": false, 00:14:01.656 "nvme_io_md": false, 00:14:01.656 "write_zeroes": true, 00:14:01.656 "zcopy": false, 00:14:01.656 "get_zone_info": false, 00:14:01.656 "zone_management": false, 00:14:01.656 "zone_append": false, 00:14:01.656 "compare": false, 00:14:01.656 "compare_and_write": false, 00:14:01.656 "abort": false, 00:14:01.656 "seek_hole": false, 00:14:01.656 "seek_data": false, 00:14:01.656 "copy": false, 00:14:01.656 "nvme_iov_md": false 00:14:01.656 }, 00:14:01.656 "memory_domains": [ 00:14:01.656 { 00:14:01.656 "dma_device_id": "system", 00:14:01.656 "dma_device_type": 1 00:14:01.656 }, 00:14:01.656 { 00:14:01.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.656 "dma_device_type": 2 00:14:01.656 }, 00:14:01.656 { 00:14:01.656 "dma_device_id": "system", 00:14:01.656 "dma_device_type": 1 00:14:01.656 }, 00:14:01.656 { 00:14:01.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.656 "dma_device_type": 2 00:14:01.656 } 00:14:01.656 ], 00:14:01.656 "driver_specific": { 00:14:01.656 "raid": { 00:14:01.656 "uuid": "91a17b8f-0285-4327-89ce-70ac11731769", 00:14:01.656 "strip_size_kb": 64, 00:14:01.656 "state": "online", 00:14:01.656 "raid_level": "raid0", 00:14:01.656 "superblock": true, 00:14:01.656 "num_base_bdevs": 2, 00:14:01.656 "num_base_bdevs_discovered": 2, 00:14:01.656 "num_base_bdevs_operational": 2, 00:14:01.656 "base_bdevs_list": [ 00:14:01.656 { 00:14:01.656 "name": "BaseBdev1", 00:14:01.656 "uuid": "b3c091f2-0d3d-4d77-9366-8262685e99b0", 00:14:01.656 "is_configured": true, 00:14:01.656 "data_offset": 2048, 00:14:01.656 "data_size": 63488 00:14:01.656 }, 00:14:01.656 { 00:14:01.656 "name": "BaseBdev2", 00:14:01.656 "uuid": "6cf3a32d-478c-478f-b120-45b0d8f647ad", 00:14:01.656 "is_configured": true, 00:14:01.656 "data_offset": 2048, 00:14:01.656 "data_size": 63488 00:14:01.656 } 00:14:01.656 ] 00:14:01.656 } 00:14:01.656 } 00:14:01.656 }' 00:14:01.656 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:01.656 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:01.656 BaseBdev2' 00:14:01.656 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:01.656 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:01.656 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:01.915 "name": "BaseBdev1", 00:14:01.915 "aliases": [ 00:14:01.915 "b3c091f2-0d3d-4d77-9366-8262685e99b0" 
00:14:01.915 ], 00:14:01.915 "product_name": "Malloc disk", 00:14:01.915 "block_size": 512, 00:14:01.915 "num_blocks": 65536, 00:14:01.915 "uuid": "b3c091f2-0d3d-4d77-9366-8262685e99b0", 00:14:01.915 "assigned_rate_limits": { 00:14:01.915 "rw_ios_per_sec": 0, 00:14:01.915 "rw_mbytes_per_sec": 0, 00:14:01.915 "r_mbytes_per_sec": 0, 00:14:01.915 "w_mbytes_per_sec": 0 00:14:01.915 }, 00:14:01.915 "claimed": true, 00:14:01.915 "claim_type": "exclusive_write", 00:14:01.915 "zoned": false, 00:14:01.915 "supported_io_types": { 00:14:01.915 "read": true, 00:14:01.915 "write": true, 00:14:01.915 "unmap": true, 00:14:01.915 "flush": true, 00:14:01.915 "reset": true, 00:14:01.915 "nvme_admin": false, 00:14:01.915 "nvme_io": false, 00:14:01.915 "nvme_io_md": false, 00:14:01.915 "write_zeroes": true, 00:14:01.915 "zcopy": true, 00:14:01.915 "get_zone_info": false, 00:14:01.915 "zone_management": false, 00:14:01.915 "zone_append": false, 00:14:01.915 "compare": false, 00:14:01.915 "compare_and_write": false, 00:14:01.915 "abort": true, 00:14:01.915 "seek_hole": false, 00:14:01.915 "seek_data": false, 00:14:01.915 "copy": true, 00:14:01.915 "nvme_iov_md": false 00:14:01.915 }, 00:14:01.915 "memory_domains": [ 00:14:01.915 { 00:14:01.915 "dma_device_id": "system", 00:14:01.915 "dma_device_type": 1 00:14:01.915 }, 00:14:01.915 { 00:14:01.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.915 "dma_device_type": 2 00:14:01.915 } 00:14:01.915 ], 00:14:01.915 "driver_specific": {} 00:14:01.915 }' 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:01.915 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.174 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.174 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.174 14:06:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.174 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.174 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.174 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:02.174 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:02.174 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:02.433 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:02.433 "name": "BaseBdev2", 00:14:02.433 "aliases": [ 00:14:02.433 "6cf3a32d-478c-478f-b120-45b0d8f647ad" 00:14:02.433 ], 00:14:02.433 "product_name": "Malloc disk", 00:14:02.433 "block_size": 512, 00:14:02.433 "num_blocks": 65536, 00:14:02.433 "uuid": 
"6cf3a32d-478c-478f-b120-45b0d8f647ad", 00:14:02.433 "assigned_rate_limits": { 00:14:02.433 "rw_ios_per_sec": 0, 00:14:02.433 "rw_mbytes_per_sec": 0, 00:14:02.433 "r_mbytes_per_sec": 0, 00:14:02.433 "w_mbytes_per_sec": 0 00:14:02.433 }, 00:14:02.433 "claimed": true, 00:14:02.433 "claim_type": "exclusive_write", 00:14:02.433 "zoned": false, 00:14:02.433 "supported_io_types": { 00:14:02.433 "read": true, 00:14:02.433 "write": true, 00:14:02.433 "unmap": true, 00:14:02.433 "flush": true, 00:14:02.433 "reset": true, 00:14:02.433 "nvme_admin": false, 00:14:02.433 "nvme_io": false, 00:14:02.433 "nvme_io_md": false, 00:14:02.433 "write_zeroes": true, 00:14:02.433 "zcopy": true, 00:14:02.433 "get_zone_info": false, 00:14:02.433 "zone_management": false, 00:14:02.433 "zone_append": false, 00:14:02.433 "compare": false, 00:14:02.433 "compare_and_write": false, 00:14:02.433 "abort": true, 00:14:02.433 "seek_hole": false, 00:14:02.433 "seek_data": false, 00:14:02.433 "copy": true, 00:14:02.433 "nvme_iov_md": false 00:14:02.433 }, 00:14:02.433 "memory_domains": [ 00:14:02.433 { 00:14:02.433 "dma_device_id": "system", 00:14:02.433 "dma_device_type": 1 00:14:02.433 }, 00:14:02.433 { 00:14:02.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:02.433 "dma_device_type": 2 00:14:02.433 } 00:14:02.433 ], 00:14:02.433 "driver_specific": {} 00:14:02.433 }' 00:14:02.433 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.433 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:02.433 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:02.433 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.691 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:02.691 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:02.691 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.691 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:02.691 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:02.691 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.691 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:02.949 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:02.949 14:06:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:03.207 [2024-07-15 14:06:48.977674] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:03.207 [2024-07-15 14:06:48.977717] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:03.207 [2024-07-15 14:06:48.977831] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@215 -- # return 1 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.207 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.464 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.464 "name": "Existed_Raid", 00:14:03.464 "uuid": "91a17b8f-0285-4327-89ce-70ac11731769", 00:14:03.464 "strip_size_kb": 64, 00:14:03.464 "state": "offline", 00:14:03.464 "raid_level": "raid0", 00:14:03.464 "superblock": true, 00:14:03.464 "num_base_bdevs": 2, 00:14:03.464 "num_base_bdevs_discovered": 1, 00:14:03.464 "num_base_bdevs_operational": 1, 00:14:03.464 "base_bdevs_list": [ 00:14:03.464 { 00:14:03.464 "name": null, 00:14:03.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.464 "is_configured": false, 00:14:03.464 "data_offset": 2048, 00:14:03.464 "data_size": 63488 00:14:03.464 }, 00:14:03.464 { 00:14:03.464 "name": "BaseBdev2", 00:14:03.464 "uuid": "6cf3a32d-478c-478f-b120-45b0d8f647ad", 00:14:03.464 "is_configured": true, 00:14:03.464 "data_offset": 2048, 00:14:03.464 "data_size": 63488 00:14:03.464 } 00:14:03.464 ] 00:14:03.464 }' 00:14:03.464 14:06:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.464 14:06:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.030 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:04.030 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:04.030 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.030 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:04.597 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:04.597 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 
-- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:04.597 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:04.856 [2024-07-15 14:06:50.613352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:04.856 [2024-07-15 14:06:50.613431] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:04.856 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:04.856 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:04.856 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.856 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 186747 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 186747 ']' 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 186747 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 186747 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 186747' 00:14:05.114 killing process with pid 186747 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 186747 00:14:05.114 [2024-07-15 14:06:50.979250] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:05.114 [2024-07-15 14:06:50.979371] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.114 14:06:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 186747 00:14:06.495 14:06:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:06.495 00:14:06.495 real 0m12.748s 00:14:06.495 user 0m22.501s 00:14:06.495 sys 0m1.428s 00:14:06.495 14:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.495 14:06:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.495 ************************************ 00:14:06.495 END TEST raid_state_function_test_sb 00:14:06.495 ************************************ 00:14:06.495 14:06:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:06.495 14:06:52 bdev_raid -- 
bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:06.495 14:06:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:06.496 14:06:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.496 14:06:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.496 ************************************ 00:14:06.496 START TEST raid_superblock_test 00:14:06.496 ************************************ 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=187139 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 187139 /var/tmp/spdk-raid.sock 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 187139 ']' 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.496 14:06:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.496 [2024-07-15 14:06:52.198383] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:14:06.496 [2024-07-15 14:06:52.198937] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187139 ] 00:14:06.496 [2024-07-15 14:06:52.348531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.754 [2024-07-15 14:06:52.561502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.011 [2024-07-15 14:06:52.758662] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:07.578 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:07.836 malloc1 00:14:07.836 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:08.094 [2024-07-15 14:06:53.872261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:08.094 [2024-07-15 14:06:53.872747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.094 [2024-07-15 14:06:53.872871] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:08.094 [2024-07-15 14:06:53.872956] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.094 [2024-07-15 14:06:53.874746] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.094 [2024-07-15 14:06:53.874878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:08.094 pt1 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc2 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:08.094 14:06:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:08.351 malloc2 00:14:08.351 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:08.609 [2024-07-15 14:06:54.522224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:08.609 [2024-07-15 14:06:54.522346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.609 [2024-07-15 14:06:54.522387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:08.609 [2024-07-15 14:06:54.522411] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.609 [2024-07-15 14:06:54.524165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.609 [2024-07-15 14:06:54.524219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:08.609 pt2 00:14:08.609 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:08.609 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:08.609 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:08.868 [2024-07-15 14:06:54.750280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:08.868 [2024-07-15 14:06:54.751771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:08.868 [2024-07-15 14:06:54.751936] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:08.868 [2024-07-15 14:06:54.751953] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:08.868 [2024-07-15 14:06:54.752060] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:14:08.868 [2024-07-15 14:06:54.752342] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:08.869 [2024-07-15 14:06:54.752367] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:08.869 [2024-07-15 14:06:54.752495] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=online 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.869 14:06:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.127 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.127 "name": "raid_bdev1", 00:14:09.127 "uuid": "9180a247-74bf-48ec-8293-08fe3f4fced2", 00:14:09.127 "strip_size_kb": 64, 00:14:09.127 "state": "online", 00:14:09.127 "raid_level": "raid0", 00:14:09.127 "superblock": true, 00:14:09.127 "num_base_bdevs": 2, 00:14:09.127 "num_base_bdevs_discovered": 2, 00:14:09.127 "num_base_bdevs_operational": 2, 00:14:09.127 "base_bdevs_list": [ 00:14:09.127 { 00:14:09.127 "name": "pt1", 00:14:09.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:09.127 "is_configured": true, 00:14:09.127 "data_offset": 2048, 00:14:09.127 "data_size": 63488 00:14:09.127 }, 00:14:09.127 { 00:14:09.127 "name": "pt2", 00:14:09.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:09.127 "is_configured": true, 00:14:09.127 "data_offset": 2048, 00:14:09.127 "data_size": 63488 00:14:09.127 } 00:14:09.127 ] 00:14:09.127 }' 00:14:09.127 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.127 14:06:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.694 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:09.694 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:09.694 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:09.694 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:09.694 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:09.694 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:09.695 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:09.695 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:09.953 [2024-07-15 14:06:55.938619] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.211 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:10.211 "name": "raid_bdev1", 00:14:10.211 "aliases": [ 00:14:10.211 "9180a247-74bf-48ec-8293-08fe3f4fced2" 00:14:10.211 ], 00:14:10.211 "product_name": "Raid Volume", 
00:14:10.211 "block_size": 512, 00:14:10.211 "num_blocks": 126976, 00:14:10.211 "uuid": "9180a247-74bf-48ec-8293-08fe3f4fced2", 00:14:10.211 "assigned_rate_limits": { 00:14:10.211 "rw_ios_per_sec": 0, 00:14:10.211 "rw_mbytes_per_sec": 0, 00:14:10.211 "r_mbytes_per_sec": 0, 00:14:10.211 "w_mbytes_per_sec": 0 00:14:10.211 }, 00:14:10.211 "claimed": false, 00:14:10.211 "zoned": false, 00:14:10.211 "supported_io_types": { 00:14:10.211 "read": true, 00:14:10.211 "write": true, 00:14:10.211 "unmap": true, 00:14:10.211 "flush": true, 00:14:10.211 "reset": true, 00:14:10.211 "nvme_admin": false, 00:14:10.211 "nvme_io": false, 00:14:10.211 "nvme_io_md": false, 00:14:10.211 "write_zeroes": true, 00:14:10.211 "zcopy": false, 00:14:10.211 "get_zone_info": false, 00:14:10.211 "zone_management": false, 00:14:10.211 "zone_append": false, 00:14:10.211 "compare": false, 00:14:10.212 "compare_and_write": false, 00:14:10.212 "abort": false, 00:14:10.212 "seek_hole": false, 00:14:10.212 "seek_data": false, 00:14:10.212 "copy": false, 00:14:10.212 "nvme_iov_md": false 00:14:10.212 }, 00:14:10.212 "memory_domains": [ 00:14:10.212 { 00:14:10.212 "dma_device_id": "system", 00:14:10.212 "dma_device_type": 1 00:14:10.212 }, 00:14:10.212 { 00:14:10.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.212 "dma_device_type": 2 00:14:10.212 }, 00:14:10.212 { 00:14:10.212 "dma_device_id": "system", 00:14:10.212 "dma_device_type": 1 00:14:10.212 }, 00:14:10.212 { 00:14:10.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.212 "dma_device_type": 2 00:14:10.212 } 00:14:10.212 ], 00:14:10.212 "driver_specific": { 00:14:10.212 "raid": { 00:14:10.212 "uuid": "9180a247-74bf-48ec-8293-08fe3f4fced2", 00:14:10.212 "strip_size_kb": 64, 00:14:10.212 "state": "online", 00:14:10.212 "raid_level": "raid0", 00:14:10.212 "superblock": true, 00:14:10.212 "num_base_bdevs": 2, 00:14:10.212 "num_base_bdevs_discovered": 2, 00:14:10.212 "num_base_bdevs_operational": 2, 00:14:10.212 "base_bdevs_list": [ 00:14:10.212 { 00:14:10.212 "name": "pt1", 00:14:10.212 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:10.212 "is_configured": true, 00:14:10.212 "data_offset": 2048, 00:14:10.212 "data_size": 63488 00:14:10.212 }, 00:14:10.212 { 00:14:10.212 "name": "pt2", 00:14:10.212 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.212 "is_configured": true, 00:14:10.212 "data_offset": 2048, 00:14:10.212 "data_size": 63488 00:14:10.212 } 00:14:10.212 ] 00:14:10.212 } 00:14:10.212 } 00:14:10.212 }' 00:14:10.212 14:06:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:10.212 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:10.212 pt2' 00:14:10.212 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.212 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:10.212 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.470 "name": "pt1", 00:14:10.470 "aliases": [ 00:14:10.470 "00000000-0000-0000-0000-000000000001" 00:14:10.470 ], 00:14:10.470 "product_name": "passthru", 00:14:10.470 "block_size": 512, 00:14:10.470 "num_blocks": 65536, 00:14:10.470 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:14:10.470 "assigned_rate_limits": { 00:14:10.470 "rw_ios_per_sec": 0, 00:14:10.470 "rw_mbytes_per_sec": 0, 00:14:10.470 "r_mbytes_per_sec": 0, 00:14:10.470 "w_mbytes_per_sec": 0 00:14:10.470 }, 00:14:10.470 "claimed": true, 00:14:10.470 "claim_type": "exclusive_write", 00:14:10.470 "zoned": false, 00:14:10.470 "supported_io_types": { 00:14:10.470 "read": true, 00:14:10.470 "write": true, 00:14:10.470 "unmap": true, 00:14:10.470 "flush": true, 00:14:10.470 "reset": true, 00:14:10.470 "nvme_admin": false, 00:14:10.470 "nvme_io": false, 00:14:10.470 "nvme_io_md": false, 00:14:10.470 "write_zeroes": true, 00:14:10.470 "zcopy": true, 00:14:10.470 "get_zone_info": false, 00:14:10.470 "zone_management": false, 00:14:10.470 "zone_append": false, 00:14:10.470 "compare": false, 00:14:10.470 "compare_and_write": false, 00:14:10.470 "abort": true, 00:14:10.470 "seek_hole": false, 00:14:10.470 "seek_data": false, 00:14:10.470 "copy": true, 00:14:10.470 "nvme_iov_md": false 00:14:10.470 }, 00:14:10.470 "memory_domains": [ 00:14:10.470 { 00:14:10.470 "dma_device_id": "system", 00:14:10.470 "dma_device_type": 1 00:14:10.470 }, 00:14:10.470 { 00:14:10.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.470 "dma_device_type": 2 00:14:10.470 } 00:14:10.470 ], 00:14:10.470 "driver_specific": { 00:14:10.470 "passthru": { 00:14:10.470 "name": "pt1", 00:14:10.470 "base_bdev_name": "malloc1" 00:14:10.470 } 00:14:10.470 } 00:14:10.470 }' 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:10.470 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:10.729 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:10.988 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:10.988 "name": "pt2", 00:14:10.988 "aliases": [ 00:14:10.988 "00000000-0000-0000-0000-000000000002" 00:14:10.988 ], 00:14:10.988 "product_name": "passthru", 00:14:10.988 "block_size": 512, 00:14:10.988 "num_blocks": 65536, 00:14:10.988 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:10.988 "assigned_rate_limits": { 00:14:10.988 "rw_ios_per_sec": 0, 00:14:10.988 "rw_mbytes_per_sec": 0, 
00:14:10.988 "r_mbytes_per_sec": 0, 00:14:10.988 "w_mbytes_per_sec": 0 00:14:10.988 }, 00:14:10.988 "claimed": true, 00:14:10.988 "claim_type": "exclusive_write", 00:14:10.988 "zoned": false, 00:14:10.988 "supported_io_types": { 00:14:10.988 "read": true, 00:14:10.988 "write": true, 00:14:10.988 "unmap": true, 00:14:10.988 "flush": true, 00:14:10.988 "reset": true, 00:14:10.988 "nvme_admin": false, 00:14:10.988 "nvme_io": false, 00:14:10.988 "nvme_io_md": false, 00:14:10.988 "write_zeroes": true, 00:14:10.988 "zcopy": true, 00:14:10.988 "get_zone_info": false, 00:14:10.988 "zone_management": false, 00:14:10.988 "zone_append": false, 00:14:10.988 "compare": false, 00:14:10.988 "compare_and_write": false, 00:14:10.988 "abort": true, 00:14:10.988 "seek_hole": false, 00:14:10.988 "seek_data": false, 00:14:10.988 "copy": true, 00:14:10.988 "nvme_iov_md": false 00:14:10.988 }, 00:14:10.988 "memory_domains": [ 00:14:10.988 { 00:14:10.988 "dma_device_id": "system", 00:14:10.988 "dma_device_type": 1 00:14:10.988 }, 00:14:10.988 { 00:14:10.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.989 "dma_device_type": 2 00:14:10.989 } 00:14:10.989 ], 00:14:10.989 "driver_specific": { 00:14:10.989 "passthru": { 00:14:10.989 "name": "pt2", 00:14:10.989 "base_bdev_name": "malloc2" 00:14:10.989 } 00:14:10.989 } 00:14:10.989 }' 00:14:10.989 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.989 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:10.989 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:10.989 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.258 14:06:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:11.259 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:11.553 [2024-07-15 14:06:57.414901] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.553 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9180a247-74bf-48ec-8293-08fe3f4fced2 00:14:11.553 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9180a247-74bf-48ec-8293-08fe3f4fced2 ']' 00:14:11.553 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:11.811 [2024-07-15 14:06:57.698754] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:11.811 [2024-07-15 14:06:57.698793] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.811 [2024-07-15 14:06:57.698884] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.811 [2024-07-15 14:06:57.698924] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:11.811 [2024-07-15 14:06:57.698934] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:11.811 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.811 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:12.069 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:12.069 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:12.069 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:12.069 14:06:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:12.327 14:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:12.327 14:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:12.586 14:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:12.586 14:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:12.844 14:06:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:13.102 [2024-07-15 14:06:59.039166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:13.102 [2024-07-15 14:06:59.040700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:13.102 [2024-07-15 14:06:59.040791] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:13.102 [2024-07-15 14:06:59.040896] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:13.102 [2024-07-15 14:06:59.040941] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.102 [2024-07-15 14:06:59.040969] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:14:13.102 request: 00:14:13.102 { 00:14:13.102 "name": "raid_bdev1", 00:14:13.102 "raid_level": "raid0", 00:14:13.102 "base_bdevs": [ 00:14:13.102 "malloc1", 00:14:13.102 "malloc2" 00:14:13.102 ], 00:14:13.102 "strip_size_kb": 64, 00:14:13.102 "superblock": false, 00:14:13.102 "method": "bdev_raid_create", 00:14:13.102 "req_id": 1 00:14:13.102 } 00:14:13.102 Got JSON-RPC error response 00:14:13.102 response: 00:14:13.102 { 00:14:13.102 "code": -17, 00:14:13.102 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:13.102 } 00:14:13.102 14:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:13.102 14:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:13.102 14:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:13.102 14:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:13.102 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:13.102 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:13.668 [2024-07-15 14:06:59.582274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:13.668 [2024-07-15 14:06:59.582370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.668 [2024-07-15 14:06:59.582429] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:13.668 [2024-07-15 14:06:59.582461] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.668 [2024-07-15 14:06:59.584349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.668 [2024-07-15 14:06:59.584421] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 
00:14:13.668 [2024-07-15 14:06:59.584513] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:13.668 [2024-07-15 14:06:59.584566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:13.668 pt1 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.668 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.926 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.926 "name": "raid_bdev1", 00:14:13.926 "uuid": "9180a247-74bf-48ec-8293-08fe3f4fced2", 00:14:13.926 "strip_size_kb": 64, 00:14:13.926 "state": "configuring", 00:14:13.926 "raid_level": "raid0", 00:14:13.926 "superblock": true, 00:14:13.926 "num_base_bdevs": 2, 00:14:13.926 "num_base_bdevs_discovered": 1, 00:14:13.926 "num_base_bdevs_operational": 2, 00:14:13.926 "base_bdevs_list": [ 00:14:13.926 { 00:14:13.926 "name": "pt1", 00:14:13.926 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:13.926 "is_configured": true, 00:14:13.926 "data_offset": 2048, 00:14:13.926 "data_size": 63488 00:14:13.926 }, 00:14:13.926 { 00:14:13.926 "name": null, 00:14:13.926 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:13.926 "is_configured": false, 00:14:13.926 "data_offset": 2048, 00:14:13.926 "data_size": 63488 00:14:13.926 } 00:14:13.926 ] 00:14:13.926 }' 00:14:13.926 14:06:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.926 14:06:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.857 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:14:14.857 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:14.857 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:14.857 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:14.857 [2024-07-15 14:07:00.761148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:14.858 [2024-07-15 14:07:00.761344] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.858 [2024-07-15 14:07:00.761407] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:14.858 [2024-07-15 14:07:00.761450] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.858 [2024-07-15 14:07:00.761993] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.858 [2024-07-15 14:07:00.762126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:14.858 [2024-07-15 14:07:00.762270] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:14.858 [2024-07-15 14:07:00.762304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:14.858 [2024-07-15 14:07:00.762431] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:14:14.858 [2024-07-15 14:07:00.762460] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:14.858 [2024-07-15 14:07:00.762591] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:14.858 [2024-07-15 14:07:00.762916] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:14:14.858 [2024-07-15 14:07:00.762949] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:14:14.858 [2024-07-15 14:07:00.763099] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.858 pt2 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.858 14:07:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.115 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.115 "name": "raid_bdev1", 00:14:15.115 "uuid": "9180a247-74bf-48ec-8293-08fe3f4fced2", 00:14:15.115 "strip_size_kb": 64, 00:14:15.115 "state": "online", 00:14:15.115 "raid_level": "raid0", 00:14:15.115 "superblock": true, 
00:14:15.115 "num_base_bdevs": 2, 00:14:15.115 "num_base_bdevs_discovered": 2, 00:14:15.115 "num_base_bdevs_operational": 2, 00:14:15.115 "base_bdevs_list": [ 00:14:15.115 { 00:14:15.115 "name": "pt1", 00:14:15.115 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.115 "is_configured": true, 00:14:15.115 "data_offset": 2048, 00:14:15.115 "data_size": 63488 00:14:15.115 }, 00:14:15.115 { 00:14:15.115 "name": "pt2", 00:14:15.115 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.115 "is_configured": true, 00:14:15.115 "data_offset": 2048, 00:14:15.115 "data_size": 63488 00:14:15.115 } 00:14:15.115 ] 00:14:15.115 }' 00:14:15.115 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.115 14:07:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:15.685 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:15.957 [2024-07-15 14:07:01.925607] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.958 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:15.958 "name": "raid_bdev1", 00:14:15.958 "aliases": [ 00:14:15.958 "9180a247-74bf-48ec-8293-08fe3f4fced2" 00:14:15.958 ], 00:14:15.958 "product_name": "Raid Volume", 00:14:15.958 "block_size": 512, 00:14:15.958 "num_blocks": 126976, 00:14:15.958 "uuid": "9180a247-74bf-48ec-8293-08fe3f4fced2", 00:14:15.958 "assigned_rate_limits": { 00:14:15.958 "rw_ios_per_sec": 0, 00:14:15.958 "rw_mbytes_per_sec": 0, 00:14:15.958 "r_mbytes_per_sec": 0, 00:14:15.958 "w_mbytes_per_sec": 0 00:14:15.958 }, 00:14:15.958 "claimed": false, 00:14:15.958 "zoned": false, 00:14:15.958 "supported_io_types": { 00:14:15.958 "read": true, 00:14:15.958 "write": true, 00:14:15.958 "unmap": true, 00:14:15.958 "flush": true, 00:14:15.958 "reset": true, 00:14:15.958 "nvme_admin": false, 00:14:15.958 "nvme_io": false, 00:14:15.958 "nvme_io_md": false, 00:14:15.958 "write_zeroes": true, 00:14:15.958 "zcopy": false, 00:14:15.958 "get_zone_info": false, 00:14:15.958 "zone_management": false, 00:14:15.958 "zone_append": false, 00:14:15.958 "compare": false, 00:14:15.958 "compare_and_write": false, 00:14:15.958 "abort": false, 00:14:15.958 "seek_hole": false, 00:14:15.958 "seek_data": false, 00:14:15.958 "copy": false, 00:14:15.958 "nvme_iov_md": false 00:14:15.958 }, 00:14:15.958 "memory_domains": [ 00:14:15.958 { 00:14:15.958 "dma_device_id": "system", 00:14:15.958 "dma_device_type": 1 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.958 "dma_device_type": 2 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "dma_device_id": "system", 00:14:15.958 
"dma_device_type": 1 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.958 "dma_device_type": 2 00:14:15.958 } 00:14:15.958 ], 00:14:15.958 "driver_specific": { 00:14:15.958 "raid": { 00:14:15.958 "uuid": "9180a247-74bf-48ec-8293-08fe3f4fced2", 00:14:15.958 "strip_size_kb": 64, 00:14:15.958 "state": "online", 00:14:15.958 "raid_level": "raid0", 00:14:15.958 "superblock": true, 00:14:15.958 "num_base_bdevs": 2, 00:14:15.958 "num_base_bdevs_discovered": 2, 00:14:15.958 "num_base_bdevs_operational": 2, 00:14:15.958 "base_bdevs_list": [ 00:14:15.958 { 00:14:15.958 "name": "pt1", 00:14:15.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:15.958 "is_configured": true, 00:14:15.958 "data_offset": 2048, 00:14:15.958 "data_size": 63488 00:14:15.958 }, 00:14:15.958 { 00:14:15.958 "name": "pt2", 00:14:15.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:15.958 "is_configured": true, 00:14:15.958 "data_offset": 2048, 00:14:15.958 "data_size": 63488 00:14:15.958 } 00:14:15.958 ] 00:14:15.958 } 00:14:15.958 } 00:14:15.958 }' 00:14:15.958 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.216 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:16.216 pt2' 00:14:16.216 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.216 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:16.216 14:07:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.473 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.473 "name": "pt1", 00:14:16.473 "aliases": [ 00:14:16.473 "00000000-0000-0000-0000-000000000001" 00:14:16.473 ], 00:14:16.474 "product_name": "passthru", 00:14:16.474 "block_size": 512, 00:14:16.474 "num_blocks": 65536, 00:14:16.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:16.474 "assigned_rate_limits": { 00:14:16.474 "rw_ios_per_sec": 0, 00:14:16.474 "rw_mbytes_per_sec": 0, 00:14:16.474 "r_mbytes_per_sec": 0, 00:14:16.474 "w_mbytes_per_sec": 0 00:14:16.474 }, 00:14:16.474 "claimed": true, 00:14:16.474 "claim_type": "exclusive_write", 00:14:16.474 "zoned": false, 00:14:16.474 "supported_io_types": { 00:14:16.474 "read": true, 00:14:16.474 "write": true, 00:14:16.474 "unmap": true, 00:14:16.474 "flush": true, 00:14:16.474 "reset": true, 00:14:16.474 "nvme_admin": false, 00:14:16.474 "nvme_io": false, 00:14:16.474 "nvme_io_md": false, 00:14:16.474 "write_zeroes": true, 00:14:16.474 "zcopy": true, 00:14:16.474 "get_zone_info": false, 00:14:16.474 "zone_management": false, 00:14:16.474 "zone_append": false, 00:14:16.474 "compare": false, 00:14:16.474 "compare_and_write": false, 00:14:16.474 "abort": true, 00:14:16.474 "seek_hole": false, 00:14:16.474 "seek_data": false, 00:14:16.474 "copy": true, 00:14:16.474 "nvme_iov_md": false 00:14:16.474 }, 00:14:16.474 "memory_domains": [ 00:14:16.474 { 00:14:16.474 "dma_device_id": "system", 00:14:16.474 "dma_device_type": 1 00:14:16.474 }, 00:14:16.474 { 00:14:16.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.474 "dma_device_type": 2 00:14:16.474 } 00:14:16.474 ], 00:14:16.474 "driver_specific": { 00:14:16.474 "passthru": { 00:14:16.474 "name": "pt1", 00:14:16.474 "base_bdev_name": "malloc1" 
00:14:16.474 } 00:14:16.474 } 00:14:16.474 }' 00:14:16.474 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.474 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.474 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.474 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.731 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.731 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.731 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.731 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.731 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.731 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.731 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.989 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.989 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.989 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.989 14:07:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:17.246 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.246 "name": "pt2", 00:14:17.246 "aliases": [ 00:14:17.246 "00000000-0000-0000-0000-000000000002" 00:14:17.246 ], 00:14:17.246 "product_name": "passthru", 00:14:17.246 "block_size": 512, 00:14:17.246 "num_blocks": 65536, 00:14:17.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:17.247 "assigned_rate_limits": { 00:14:17.247 "rw_ios_per_sec": 0, 00:14:17.247 "rw_mbytes_per_sec": 0, 00:14:17.247 "r_mbytes_per_sec": 0, 00:14:17.247 "w_mbytes_per_sec": 0 00:14:17.247 }, 00:14:17.247 "claimed": true, 00:14:17.247 "claim_type": "exclusive_write", 00:14:17.247 "zoned": false, 00:14:17.247 "supported_io_types": { 00:14:17.247 "read": true, 00:14:17.247 "write": true, 00:14:17.247 "unmap": true, 00:14:17.247 "flush": true, 00:14:17.247 "reset": true, 00:14:17.247 "nvme_admin": false, 00:14:17.247 "nvme_io": false, 00:14:17.247 "nvme_io_md": false, 00:14:17.247 "write_zeroes": true, 00:14:17.247 "zcopy": true, 00:14:17.247 "get_zone_info": false, 00:14:17.247 "zone_management": false, 00:14:17.247 "zone_append": false, 00:14:17.247 "compare": false, 00:14:17.247 "compare_and_write": false, 00:14:17.247 "abort": true, 00:14:17.247 "seek_hole": false, 00:14:17.247 "seek_data": false, 00:14:17.247 "copy": true, 00:14:17.247 "nvme_iov_md": false 00:14:17.247 }, 00:14:17.247 "memory_domains": [ 00:14:17.247 { 00:14:17.247 "dma_device_id": "system", 00:14:17.247 "dma_device_type": 1 00:14:17.247 }, 00:14:17.247 { 00:14:17.247 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.247 "dma_device_type": 2 00:14:17.247 } 00:14:17.247 ], 00:14:17.247 "driver_specific": { 00:14:17.247 "passthru": { 00:14:17.247 "name": "pt2", 00:14:17.247 "base_bdev_name": "malloc2" 00:14:17.247 } 00:14:17.247 } 00:14:17.247 }' 00:14:17.247 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
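The property checks running here pull each configured base bdev's JSON and verify block_size, md_size, md_interleave and dif_type in turn. Condensed into a standalone sketch (socket path and rpc.py location copied from the trace; the loop and error messages are illustrative, not the exact bdev_raid.sh helpers):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Names of the configured base bdevs of raid_bdev1 (expands to: pt1 pt2).
names=$($RPC bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[] | .driver_specific.raid.base_bdevs_list[]
                 | select(.is_configured == true).name')

for name in $names; do
    info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    # Same field checks as the jq calls at bdev_raid.sh lines 205-208 in the trace.
    [[ $(jq .block_size    <<<"$info") == 512  ]] || echo "$name: bad block_size" >&2
    [[ $(jq .md_size       <<<"$info") == null ]] || echo "$name: bad md_size" >&2
    [[ $(jq .md_interleave <<<"$info") == null ]] || echo "$name: bad md_interleave" >&2
    [[ $(jq .dif_type      <<<"$info") == null ]] || echo "$name: bad dif_type" >&2
done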
00:14:17.247 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.247 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.247 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:17.504 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:17.761 [2024-07-15 14:07:03.745958] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9180a247-74bf-48ec-8293-08fe3f4fced2 '!=' 9180a247-74bf-48ec-8293-08fe3f4fced2 ']' 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 187139 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 187139 ']' 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 187139 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 187139 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:18.019 killing process with pid 187139 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 187139' 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 187139 00:14:18.019 [2024-07-15 14:07:03.788907] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.019 [2024-07-15 14:07:03.788969] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.019 [2024-07-15 14:07:03.789006] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:18.019 [2024-07-15 14:07:03.789022] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:14:18.019 14:07:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 187139 00:14:18.020 [2024-07-15 14:07:03.977835] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:19.398 14:07:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:19.398 00:14:19.398 real 0m12.954s 00:14:19.398 user 0m22.898s 00:14:19.398 sys 0m1.403s 00:14:19.398 14:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:19.398 14:07:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.398 ************************************ 00:14:19.398 END TEST raid_superblock_test 00:14:19.398 ************************************ 00:14:19.398 14:07:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:19.398 14:07:05 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:19.398 14:07:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:19.398 14:07:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.398 14:07:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:19.398 ************************************ 00:14:19.398 START TEST raid_read_error_test 00:14:19.398 ************************************ 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:19.398 14:07:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.OCd2XNBoai 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=187532 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 187532 /var/tmp/spdk-raid.sock 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 187532 ']' 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:19.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.398 14:07:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.398 [2024-07-15 14:07:05.221527] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
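At this point the read-error test has put together its bdevperf invocation: a 64 KiB strip via '-z 64', a log file from mktemp under /raidtest, and the bdevperf flags visible in the trace (-T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid). A simplified launch sketch, with the waitforlisten helper approximated by an RPC poll loop (the real helper does more bookkeeping):

SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
LOG=$(mktemp -p /raidtest)   # the trace got /raidtest/tmp.OCd2XNBoai

# -z keeps bdevperf idle until the perform_tests RPC arrives (sent later via
# bdevperf.py in the trace); output is kept for the fail_per_s parsing at the end.
$SPDK/build/examples/bdevperf -r "$SOCK" -T raid_bdev1 -t 60 -w randrw -M 50 \
    -o 128k -q 1 -z -f -L bdev_raid > "$LOG" 2>&1 &
raid_pid=$!

# Poll the socket until the app answers RPCs, then continue with bdev setup.
until $SPDK/scripts/rpc.py -s "$SOCK" rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done
echo "bdevperf pid $raid_pid listening on $SOCK"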
00:14:19.398 [2024-07-15 14:07:05.222071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187532 ] 00:14:19.398 [2024-07-15 14:07:05.373319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.657 [2024-07-15 14:07:05.587104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.915 [2024-07-15 14:07:05.784806] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:20.175 14:07:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.175 14:07:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:20.175 14:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:20.175 14:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:20.434 BaseBdev1_malloc 00:14:20.434 14:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:20.692 true 00:14:20.692 14:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:20.950 [2024-07-15 14:07:06.858705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:20.950 [2024-07-15 14:07:06.859180] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.950 [2024-07-15 14:07:06.859315] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:14:20.950 [2024-07-15 14:07:06.859416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.950 [2024-07-15 14:07:06.861268] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.950 [2024-07-15 14:07:06.861407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:20.950 BaseBdev1 00:14:20.950 14:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:20.950 14:07:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:21.209 BaseBdev2_malloc 00:14:21.209 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:21.466 true 00:14:21.466 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:21.724 [2024-07-15 14:07:07.641586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:21.724 [2024-07-15 14:07:07.642038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:21.724 [2024-07-15 14:07:07.642311] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:21.724 [2024-07-15 14:07:07.642531] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:21.724 [2024-07-15 14:07:07.644405] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:21.724 [2024-07-15 14:07:07.644637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:21.724 BaseBdev2 00:14:21.724 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:21.982 [2024-07-15 14:07:07.909697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.982 [2024-07-15 14:07:07.911425] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.982 [2024-07-15 14:07:07.911762] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:14:21.982 [2024-07-15 14:07:07.911898] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:21.982 [2024-07-15 14:07:07.912047] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:14:21.982 [2024-07-15 14:07:07.912365] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:14:21.982 [2024-07-15 14:07:07.912417] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:14:21.982 [2024-07-15 14:07:07.912697] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.982 14:07:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.241 14:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.241 "name": "raid_bdev1", 00:14:22.241 "uuid": "af3aea29-9ff0-40b0-a9df-3e5591931ed0", 00:14:22.241 "strip_size_kb": 64, 00:14:22.241 "state": "online", 00:14:22.241 "raid_level": "raid0", 00:14:22.241 "superblock": true, 00:14:22.241 "num_base_bdevs": 2, 00:14:22.241 "num_base_bdevs_discovered": 2, 00:14:22.241 "num_base_bdevs_operational": 2, 00:14:22.241 "base_bdevs_list": [ 00:14:22.241 { 00:14:22.241 "name": "BaseBdev1", 
00:14:22.241 "uuid": "16934893-e1c1-55f6-8feb-c8b9d538e262", 00:14:22.241 "is_configured": true, 00:14:22.241 "data_offset": 2048, 00:14:22.241 "data_size": 63488 00:14:22.241 }, 00:14:22.241 { 00:14:22.241 "name": "BaseBdev2", 00:14:22.241 "uuid": "9dcdb1b5-b9ea-5c34-a740-3481deca5252", 00:14:22.241 "is_configured": true, 00:14:22.241 "data_offset": 2048, 00:14:22.241 "data_size": 63488 00:14:22.241 } 00:14:22.241 ] 00:14:22.241 }' 00:14:22.241 14:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.241 14:07:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.176 14:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:23.176 14:07:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:23.176 [2024-07-15 14:07:08.927091] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:24.110 14:07:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.369 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.627 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.627 "name": "raid_bdev1", 00:14:24.627 "uuid": "af3aea29-9ff0-40b0-a9df-3e5591931ed0", 00:14:24.627 "strip_size_kb": 64, 00:14:24.627 "state": "online", 00:14:24.627 "raid_level": "raid0", 00:14:24.627 "superblock": true, 00:14:24.627 "num_base_bdevs": 2, 00:14:24.627 "num_base_bdevs_discovered": 2, 00:14:24.627 "num_base_bdevs_operational": 2, 00:14:24.627 "base_bdevs_list": [ 00:14:24.627 { 00:14:24.627 "name": "BaseBdev1", 00:14:24.627 "uuid": 
"16934893-e1c1-55f6-8feb-c8b9d538e262", 00:14:24.627 "is_configured": true, 00:14:24.627 "data_offset": 2048, 00:14:24.627 "data_size": 63488 00:14:24.627 }, 00:14:24.627 { 00:14:24.627 "name": "BaseBdev2", 00:14:24.627 "uuid": "9dcdb1b5-b9ea-5c34-a740-3481deca5252", 00:14:24.627 "is_configured": true, 00:14:24.627 "data_offset": 2048, 00:14:24.627 "data_size": 63488 00:14:24.627 } 00:14:24.627 ] 00:14:24.627 }' 00:14:24.627 14:07:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.627 14:07:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.193 14:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:25.452 [2024-07-15 14:07:11.339188] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.452 [2024-07-15 14:07:11.339487] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.452 [2024-07-15 14:07:11.341016] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.452 [2024-07-15 14:07:11.341223] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.452 [2024-07-15 14:07:11.341290] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:25.452 [2024-07-15 14:07:11.341401] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:14:25.452 0 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 187532 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 187532 ']' 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 187532 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 187532 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 187532' 00:14:25.452 killing process with pid 187532 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 187532 00:14:25.452 [2024-07-15 14:07:11.395104] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:25.452 14:07:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 187532 00:14:25.710 [2024-07-15 14:07:11.509928] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.OCd2XNBoai 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:14:27.082 00:14:27.082 real 0m7.528s 00:14:27.082 user 0m11.385s 00:14:27.082 sys 0m0.793s 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.082 14:07:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.082 ************************************ 00:14:27.082 END TEST raid_read_error_test 00:14:27.083 ************************************ 00:14:27.083 14:07:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:27.083 14:07:12 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:27.083 14:07:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:27.083 14:07:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.083 14:07:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:27.083 ************************************ 00:14:27.083 START TEST raid_write_error_test 00:14:27.083 ************************************ 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.F5Q0wLe8ed 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=187720 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 187720 /var/tmp/spdk-raid.sock 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 187720 ']' 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:27.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:27.083 14:07:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.083 [2024-07-15 14:07:12.818278] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
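The write-error run starting here builds the same three-layer stack per base device as the read-error run above: a malloc bdev, an error bdev wrapped around it (exposed as EE_<name>), and a passthru bdev on top that the raid volume claims. Reduced to the bare RPC calls that appear in the trace (a sketch against an already-listening app, not the exact test helpers):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

for i in 1 2; do
    $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc        # backing store
    $RPC bdev_error_create BaseBdev${i}_malloc                   # adds EE_BaseBdev${i}_malloc
    $RPC bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
done

# Two-disk raid0, 64 KiB strip, with an on-disk superblock (-s), as in the log.
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

# Once I/O is running, the test flips the first error bdev into failure mode;
# the write-error variant injects 'write failure' instead of 'read failure'.
$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure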
00:14:27.083 [2024-07-15 14:07:12.818694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187720 ] 00:14:27.083 [2024-07-15 14:07:12.981628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.343 [2024-07-15 14:07:13.198376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.600 [2024-07-15 14:07:13.398444] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:27.858 14:07:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:27.858 14:07:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:27.858 14:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:27.858 14:07:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:28.117 BaseBdev1_malloc 00:14:28.117 14:07:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:28.375 true 00:14:28.375 14:07:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:28.941 [2024-07-15 14:07:14.637656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:28.941 [2024-07-15 14:07:14.638381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:28.941 [2024-07-15 14:07:14.638625] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:14:28.941 [2024-07-15 14:07:14.638845] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:28.941 [2024-07-15 14:07:14.640875] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:28.941 [2024-07-15 14:07:14.641122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:28.941 BaseBdev1 00:14:28.941 14:07:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:28.941 14:07:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:28.941 BaseBdev2_malloc 00:14:28.941 14:07:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:29.506 true 00:14:29.506 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:29.506 [2024-07-15 14:07:15.460283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:29.506 [2024-07-15 14:07:15.460843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.506 [2024-07-15 14:07:15.461080] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.506 [2024-07-15 
14:07:15.461293] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.506 [2024-07-15 14:07:15.463373] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.506 [2024-07-15 14:07:15.463636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:29.506 BaseBdev2 00:14:29.506 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:30.090 [2024-07-15 14:07:15.808549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.090 [2024-07-15 14:07:15.810545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.090 [2024-07-15 14:07:15.810866] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:14:30.090 [2024-07-15 14:07:15.810999] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:30.090 [2024-07-15 14:07:15.811155] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:14:30.090 [2024-07-15 14:07:15.811484] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:14:30.090 [2024-07-15 14:07:15.811532] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:14:30.090 [2024-07-15 14:07:15.811792] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.090 14:07:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.347 14:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.347 "name": "raid_bdev1", 00:14:30.347 "uuid": "3cc8168d-5e93-4114-a716-7593c24e8146", 00:14:30.347 "strip_size_kb": 64, 00:14:30.347 "state": "online", 00:14:30.347 "raid_level": "raid0", 00:14:30.347 "superblock": true, 00:14:30.347 "num_base_bdevs": 2, 00:14:30.347 "num_base_bdevs_discovered": 2, 00:14:30.347 "num_base_bdevs_operational": 2, 00:14:30.347 "base_bdevs_list": [ 00:14:30.347 { 00:14:30.347 
"name": "BaseBdev1", 00:14:30.347 "uuid": "ec975fb2-1b35-5d06-95a0-730f511d4514", 00:14:30.347 "is_configured": true, 00:14:30.347 "data_offset": 2048, 00:14:30.347 "data_size": 63488 00:14:30.347 }, 00:14:30.347 { 00:14:30.347 "name": "BaseBdev2", 00:14:30.347 "uuid": "b559ca18-d3c4-589c-9b52-0cef4a0e794f", 00:14:30.347 "is_configured": true, 00:14:30.347 "data_offset": 2048, 00:14:30.347 "data_size": 63488 00:14:30.347 } 00:14:30.347 ] 00:14:30.347 }' 00:14:30.347 14:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.347 14:07:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.913 14:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:30.913 14:07:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:30.913 [2024-07-15 14:07:16.885943] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:14:31.866 14:07:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.124 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.383 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.384 "name": "raid_bdev1", 00:14:32.384 "uuid": "3cc8168d-5e93-4114-a716-7593c24e8146", 00:14:32.384 "strip_size_kb": 64, 00:14:32.384 "state": "online", 00:14:32.384 "raid_level": "raid0", 00:14:32.384 "superblock": true, 00:14:32.384 "num_base_bdevs": 2, 00:14:32.384 "num_base_bdevs_discovered": 2, 00:14:32.384 "num_base_bdevs_operational": 2, 00:14:32.384 "base_bdevs_list": [ 00:14:32.384 { 00:14:32.384 
"name": "BaseBdev1", 00:14:32.384 "uuid": "ec975fb2-1b35-5d06-95a0-730f511d4514", 00:14:32.384 "is_configured": true, 00:14:32.384 "data_offset": 2048, 00:14:32.384 "data_size": 63488 00:14:32.384 }, 00:14:32.384 { 00:14:32.384 "name": "BaseBdev2", 00:14:32.384 "uuid": "b559ca18-d3c4-589c-9b52-0cef4a0e794f", 00:14:32.384 "is_configured": true, 00:14:32.384 "data_offset": 2048, 00:14:32.384 "data_size": 63488 00:14:32.384 } 00:14:32.384 ] 00:14:32.384 }' 00:14:32.384 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.384 14:07:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.316 14:07:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:33.316 [2024-07-15 14:07:19.210535] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.316 [2024-07-15 14:07:19.210866] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.316 [2024-07-15 14:07:19.212301] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:33.316 [2024-07-15 14:07:19.212485] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.316 [2024-07-15 14:07:19.212628] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.316 [2024-07-15 14:07:19.212749] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:14:33.316 0 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 187720 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 187720 ']' 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 187720 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 187720 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 187720' 00:14:33.316 killing process with pid 187720 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 187720 00:14:33.316 [2024-07-15 14:07:19.263977] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.316 14:07:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 187720 00:14:33.573 [2024-07-15 14:07:19.376174] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.F5Q0wLe8ed 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:14:34.946 
14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:14:34.946 00:14:34.946 real 0m7.831s 00:14:34.946 user 0m11.897s 00:14:34.946 sys 0m0.840s 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.946 14:07:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.946 ************************************ 00:14:34.946 END TEST raid_write_error_test 00:14:34.946 ************************************ 00:14:34.946 14:07:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:34.946 14:07:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:34.946 14:07:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:34.946 14:07:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:34.946 14:07:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.946 14:07:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.946 ************************************ 00:14:34.946 START TEST raid_state_function_test 00:14:34.946 ************************************ 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:34.946 14:07:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:34.946 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=187920 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 187920' 00:14:34.947 Process raid pid: 187920 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 187920 /var/tmp/spdk-raid.sock 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 187920 ']' 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:34.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.947 14:07:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.947 [2024-07-15 14:07:20.699066] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
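The state-function test starting here re-checks the raid bdev after every configuration step through the same pair of commands used throughout this log: bdev_raid_get_bdevs all piped into a jq select on the raid's name. In isolation that check boils down to a short sketch like the following (Existed_Raid is the name this test uses; the expected state depends on the step under test, e.g. configuring):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Fetch every raid bdev, keep only the one under test, then read its state.
state=$($RPC bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state')

if [[ $state != configuring ]]; then
    echo "unexpected raid state: $state" >&2
    exit 1
fi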
00:14:34.947 [2024-07-15 14:07:20.700045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.947 [2024-07-15 14:07:20.874509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.205 [2024-07-15 14:07:21.097920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.464 [2024-07-15 14:07:21.301845] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.029 14:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:36.029 14:07:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:36.029 14:07:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:36.029 [2024-07-15 14:07:22.015064] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.029 [2024-07-15 14:07:22.015723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.029 [2024-07-15 14:07:22.015909] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.029 [2024-07-15 14:07:22.016050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.287 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.546 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.546 "name": "Existed_Raid", 00:14:36.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.546 "strip_size_kb": 64, 00:14:36.546 "state": "configuring", 00:14:36.546 "raid_level": "concat", 00:14:36.546 "superblock": false, 00:14:36.546 "num_base_bdevs": 2, 00:14:36.546 "num_base_bdevs_discovered": 0, 00:14:36.546 "num_base_bdevs_operational": 2, 00:14:36.546 
"base_bdevs_list": [ 00:14:36.546 { 00:14:36.546 "name": "BaseBdev1", 00:14:36.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.546 "is_configured": false, 00:14:36.546 "data_offset": 0, 00:14:36.546 "data_size": 0 00:14:36.546 }, 00:14:36.546 { 00:14:36.546 "name": "BaseBdev2", 00:14:36.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.546 "is_configured": false, 00:14:36.546 "data_offset": 0, 00:14:36.546 "data_size": 0 00:14:36.546 } 00:14:36.546 ] 00:14:36.546 }' 00:14:36.546 14:07:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.546 14:07:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.112 14:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:37.372 [2024-07-15 14:07:23.235216] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.372 [2024-07-15 14:07:23.235517] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:37.372 14:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:37.632 [2024-07-15 14:07:23.479288] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:37.632 [2024-07-15 14:07:23.479976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:37.632 [2024-07-15 14:07:23.480114] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:37.632 [2024-07-15 14:07:23.480273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:37.632 14:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:37.890 [2024-07-15 14:07:23.754146] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.890 BaseBdev1 00:14:37.890 14:07:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:37.890 14:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:37.890 14:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:37.890 14:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:37.890 14:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:37.890 14:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:37.890 14:07:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.176 14:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:38.434 [ 00:14:38.434 { 00:14:38.434 "name": "BaseBdev1", 00:14:38.434 "aliases": [ 00:14:38.434 "e17e787f-851b-42ea-b0fd-d37c0e924e95" 00:14:38.434 ], 00:14:38.434 "product_name": "Malloc disk", 00:14:38.434 "block_size": 512, 
00:14:38.434 "num_blocks": 65536, 00:14:38.434 "uuid": "e17e787f-851b-42ea-b0fd-d37c0e924e95", 00:14:38.434 "assigned_rate_limits": { 00:14:38.434 "rw_ios_per_sec": 0, 00:14:38.434 "rw_mbytes_per_sec": 0, 00:14:38.434 "r_mbytes_per_sec": 0, 00:14:38.434 "w_mbytes_per_sec": 0 00:14:38.434 }, 00:14:38.434 "claimed": true, 00:14:38.434 "claim_type": "exclusive_write", 00:14:38.434 "zoned": false, 00:14:38.434 "supported_io_types": { 00:14:38.434 "read": true, 00:14:38.434 "write": true, 00:14:38.434 "unmap": true, 00:14:38.434 "flush": true, 00:14:38.434 "reset": true, 00:14:38.434 "nvme_admin": false, 00:14:38.434 "nvme_io": false, 00:14:38.434 "nvme_io_md": false, 00:14:38.434 "write_zeroes": true, 00:14:38.434 "zcopy": true, 00:14:38.434 "get_zone_info": false, 00:14:38.434 "zone_management": false, 00:14:38.434 "zone_append": false, 00:14:38.434 "compare": false, 00:14:38.434 "compare_and_write": false, 00:14:38.434 "abort": true, 00:14:38.434 "seek_hole": false, 00:14:38.434 "seek_data": false, 00:14:38.434 "copy": true, 00:14:38.434 "nvme_iov_md": false 00:14:38.434 }, 00:14:38.434 "memory_domains": [ 00:14:38.434 { 00:14:38.434 "dma_device_id": "system", 00:14:38.434 "dma_device_type": 1 00:14:38.434 }, 00:14:38.434 { 00:14:38.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.434 "dma_device_type": 2 00:14:38.434 } 00:14:38.434 ], 00:14:38.434 "driver_specific": {} 00:14:38.434 } 00:14:38.434 ] 00:14:38.434 14:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:38.434 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:38.434 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:38.434 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:38.434 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.435 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.693 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.693 "name": "Existed_Raid", 00:14:38.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.693 "strip_size_kb": 64, 00:14:38.693 "state": "configuring", 00:14:38.693 "raid_level": "concat", 00:14:38.693 "superblock": false, 00:14:38.693 "num_base_bdevs": 2, 00:14:38.693 "num_base_bdevs_discovered": 1, 00:14:38.693 "num_base_bdevs_operational": 2, 00:14:38.693 "base_bdevs_list": [ 00:14:38.693 { 00:14:38.693 "name": 
"BaseBdev1", 00:14:38.693 "uuid": "e17e787f-851b-42ea-b0fd-d37c0e924e95", 00:14:38.693 "is_configured": true, 00:14:38.693 "data_offset": 0, 00:14:38.693 "data_size": 65536 00:14:38.693 }, 00:14:38.693 { 00:14:38.693 "name": "BaseBdev2", 00:14:38.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.693 "is_configured": false, 00:14:38.693 "data_offset": 0, 00:14:38.693 "data_size": 0 00:14:38.693 } 00:14:38.693 ] 00:14:38.693 }' 00:14:38.693 14:07:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.693 14:07:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.259 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:39.518 [2024-07-15 14:07:25.446570] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.518 [2024-07-15 14:07:25.446846] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:39.518 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:39.777 [2024-07-15 14:07:25.698576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.777 [2024-07-15 14:07:25.700426] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:39.777 [2024-07-15 14:07:25.701031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.777 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.035 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.035 "name": "Existed_Raid", 
00:14:40.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.035 "strip_size_kb": 64, 00:14:40.035 "state": "configuring", 00:14:40.035 "raid_level": "concat", 00:14:40.035 "superblock": false, 00:14:40.035 "num_base_bdevs": 2, 00:14:40.035 "num_base_bdevs_discovered": 1, 00:14:40.035 "num_base_bdevs_operational": 2, 00:14:40.035 "base_bdevs_list": [ 00:14:40.035 { 00:14:40.035 "name": "BaseBdev1", 00:14:40.035 "uuid": "e17e787f-851b-42ea-b0fd-d37c0e924e95", 00:14:40.035 "is_configured": true, 00:14:40.035 "data_offset": 0, 00:14:40.035 "data_size": 65536 00:14:40.035 }, 00:14:40.035 { 00:14:40.035 "name": "BaseBdev2", 00:14:40.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.035 "is_configured": false, 00:14:40.035 "data_offset": 0, 00:14:40.035 "data_size": 0 00:14:40.035 } 00:14:40.035 ] 00:14:40.035 }' 00:14:40.035 14:07:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.035 14:07:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.603 14:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:41.170 [2024-07-15 14:07:26.870350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:41.171 [2024-07-15 14:07:26.870613] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:41.171 [2024-07-15 14:07:26.870661] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:41.171 [2024-07-15 14:07:26.870892] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:41.171 [2024-07-15 14:07:26.871256] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:41.171 [2024-07-15 14:07:26.871400] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:41.171 [2024-07-15 14:07:26.871701] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.171 BaseBdev2 00:14:41.171 14:07:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:41.171 14:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:41.171 14:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:41.171 14:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:41.171 14:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:41.171 14:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:41.171 14:07:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.428 14:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:41.687 [ 00:14:41.687 { 00:14:41.687 "name": "BaseBdev2", 00:14:41.687 "aliases": [ 00:14:41.687 "514eefca-8fda-49a4-8521-714fd3c14caa" 00:14:41.687 ], 00:14:41.687 "product_name": "Malloc disk", 00:14:41.687 "block_size": 512, 00:14:41.687 "num_blocks": 65536, 00:14:41.687 "uuid": "514eefca-8fda-49a4-8521-714fd3c14caa", 
00:14:41.687 "assigned_rate_limits": { 00:14:41.687 "rw_ios_per_sec": 0, 00:14:41.687 "rw_mbytes_per_sec": 0, 00:14:41.687 "r_mbytes_per_sec": 0, 00:14:41.687 "w_mbytes_per_sec": 0 00:14:41.687 }, 00:14:41.687 "claimed": true, 00:14:41.687 "claim_type": "exclusive_write", 00:14:41.687 "zoned": false, 00:14:41.687 "supported_io_types": { 00:14:41.687 "read": true, 00:14:41.687 "write": true, 00:14:41.687 "unmap": true, 00:14:41.687 "flush": true, 00:14:41.687 "reset": true, 00:14:41.687 "nvme_admin": false, 00:14:41.687 "nvme_io": false, 00:14:41.687 "nvme_io_md": false, 00:14:41.687 "write_zeroes": true, 00:14:41.687 "zcopy": true, 00:14:41.687 "get_zone_info": false, 00:14:41.687 "zone_management": false, 00:14:41.687 "zone_append": false, 00:14:41.687 "compare": false, 00:14:41.687 "compare_and_write": false, 00:14:41.687 "abort": true, 00:14:41.687 "seek_hole": false, 00:14:41.687 "seek_data": false, 00:14:41.687 "copy": true, 00:14:41.687 "nvme_iov_md": false 00:14:41.687 }, 00:14:41.687 "memory_domains": [ 00:14:41.687 { 00:14:41.687 "dma_device_id": "system", 00:14:41.687 "dma_device_type": 1 00:14:41.687 }, 00:14:41.687 { 00:14:41.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.687 "dma_device_type": 2 00:14:41.687 } 00:14:41.687 ], 00:14:41.687 "driver_specific": {} 00:14:41.687 } 00:14:41.687 ] 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.687 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.946 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.946 "name": "Existed_Raid", 00:14:41.946 "uuid": "9e18100e-acbf-40a5-9a77-d9b4b0c0f526", 00:14:41.946 "strip_size_kb": 64, 00:14:41.946 "state": "online", 00:14:41.946 "raid_level": "concat", 00:14:41.946 "superblock": false, 00:14:41.946 "num_base_bdevs": 2, 00:14:41.946 "num_base_bdevs_discovered": 2, 00:14:41.946 
"num_base_bdevs_operational": 2, 00:14:41.946 "base_bdevs_list": [ 00:14:41.946 { 00:14:41.946 "name": "BaseBdev1", 00:14:41.946 "uuid": "e17e787f-851b-42ea-b0fd-d37c0e924e95", 00:14:41.946 "is_configured": true, 00:14:41.946 "data_offset": 0, 00:14:41.946 "data_size": 65536 00:14:41.946 }, 00:14:41.946 { 00:14:41.946 "name": "BaseBdev2", 00:14:41.946 "uuid": "514eefca-8fda-49a4-8521-714fd3c14caa", 00:14:41.946 "is_configured": true, 00:14:41.946 "data_offset": 0, 00:14:41.946 "data_size": 65536 00:14:41.946 } 00:14:41.946 ] 00:14:41.946 }' 00:14:41.946 14:07:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.946 14:07:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:42.513 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:42.773 [2024-07-15 14:07:28.618909] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.773 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:42.773 "name": "Existed_Raid", 00:14:42.773 "aliases": [ 00:14:42.773 "9e18100e-acbf-40a5-9a77-d9b4b0c0f526" 00:14:42.773 ], 00:14:42.773 "product_name": "Raid Volume", 00:14:42.773 "block_size": 512, 00:14:42.773 "num_blocks": 131072, 00:14:42.773 "uuid": "9e18100e-acbf-40a5-9a77-d9b4b0c0f526", 00:14:42.773 "assigned_rate_limits": { 00:14:42.773 "rw_ios_per_sec": 0, 00:14:42.773 "rw_mbytes_per_sec": 0, 00:14:42.773 "r_mbytes_per_sec": 0, 00:14:42.773 "w_mbytes_per_sec": 0 00:14:42.773 }, 00:14:42.773 "claimed": false, 00:14:42.773 "zoned": false, 00:14:42.773 "supported_io_types": { 00:14:42.773 "read": true, 00:14:42.773 "write": true, 00:14:42.773 "unmap": true, 00:14:42.773 "flush": true, 00:14:42.773 "reset": true, 00:14:42.773 "nvme_admin": false, 00:14:42.773 "nvme_io": false, 00:14:42.773 "nvme_io_md": false, 00:14:42.773 "write_zeroes": true, 00:14:42.773 "zcopy": false, 00:14:42.773 "get_zone_info": false, 00:14:42.773 "zone_management": false, 00:14:42.773 "zone_append": false, 00:14:42.773 "compare": false, 00:14:42.773 "compare_and_write": false, 00:14:42.773 "abort": false, 00:14:42.773 "seek_hole": false, 00:14:42.773 "seek_data": false, 00:14:42.773 "copy": false, 00:14:42.773 "nvme_iov_md": false 00:14:42.773 }, 00:14:42.773 "memory_domains": [ 00:14:42.773 { 00:14:42.773 "dma_device_id": "system", 00:14:42.773 "dma_device_type": 1 00:14:42.773 }, 00:14:42.773 { 00:14:42.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.773 "dma_device_type": 2 00:14:42.773 }, 00:14:42.773 { 00:14:42.773 "dma_device_id": "system", 00:14:42.773 "dma_device_type": 1 00:14:42.773 }, 
00:14:42.773 { 00:14:42.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.773 "dma_device_type": 2 00:14:42.773 } 00:14:42.773 ], 00:14:42.773 "driver_specific": { 00:14:42.773 "raid": { 00:14:42.773 "uuid": "9e18100e-acbf-40a5-9a77-d9b4b0c0f526", 00:14:42.773 "strip_size_kb": 64, 00:14:42.773 "state": "online", 00:14:42.773 "raid_level": "concat", 00:14:42.773 "superblock": false, 00:14:42.773 "num_base_bdevs": 2, 00:14:42.773 "num_base_bdevs_discovered": 2, 00:14:42.773 "num_base_bdevs_operational": 2, 00:14:42.773 "base_bdevs_list": [ 00:14:42.773 { 00:14:42.773 "name": "BaseBdev1", 00:14:42.773 "uuid": "e17e787f-851b-42ea-b0fd-d37c0e924e95", 00:14:42.773 "is_configured": true, 00:14:42.773 "data_offset": 0, 00:14:42.773 "data_size": 65536 00:14:42.773 }, 00:14:42.773 { 00:14:42.773 "name": "BaseBdev2", 00:14:42.773 "uuid": "514eefca-8fda-49a4-8521-714fd3c14caa", 00:14:42.773 "is_configured": true, 00:14:42.773 "data_offset": 0, 00:14:42.773 "data_size": 65536 00:14:42.773 } 00:14:42.773 ] 00:14:42.773 } 00:14:42.773 } 00:14:42.773 }' 00:14:42.773 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.773 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:42.773 BaseBdev2' 00:14:42.773 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:42.773 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:42.773 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:43.031 14:07:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:43.031 "name": "BaseBdev1", 00:14:43.031 "aliases": [ 00:14:43.032 "e17e787f-851b-42ea-b0fd-d37c0e924e95" 00:14:43.032 ], 00:14:43.032 "product_name": "Malloc disk", 00:14:43.032 "block_size": 512, 00:14:43.032 "num_blocks": 65536, 00:14:43.032 "uuid": "e17e787f-851b-42ea-b0fd-d37c0e924e95", 00:14:43.032 "assigned_rate_limits": { 00:14:43.032 "rw_ios_per_sec": 0, 00:14:43.032 "rw_mbytes_per_sec": 0, 00:14:43.032 "r_mbytes_per_sec": 0, 00:14:43.032 "w_mbytes_per_sec": 0 00:14:43.032 }, 00:14:43.032 "claimed": true, 00:14:43.032 "claim_type": "exclusive_write", 00:14:43.032 "zoned": false, 00:14:43.032 "supported_io_types": { 00:14:43.032 "read": true, 00:14:43.032 "write": true, 00:14:43.032 "unmap": true, 00:14:43.032 "flush": true, 00:14:43.032 "reset": true, 00:14:43.032 "nvme_admin": false, 00:14:43.032 "nvme_io": false, 00:14:43.032 "nvme_io_md": false, 00:14:43.032 "write_zeroes": true, 00:14:43.032 "zcopy": true, 00:14:43.032 "get_zone_info": false, 00:14:43.032 "zone_management": false, 00:14:43.032 "zone_append": false, 00:14:43.032 "compare": false, 00:14:43.032 "compare_and_write": false, 00:14:43.032 "abort": true, 00:14:43.032 "seek_hole": false, 00:14:43.032 "seek_data": false, 00:14:43.032 "copy": true, 00:14:43.032 "nvme_iov_md": false 00:14:43.032 }, 00:14:43.032 "memory_domains": [ 00:14:43.032 { 00:14:43.032 "dma_device_id": "system", 00:14:43.032 "dma_device_type": 1 00:14:43.032 }, 00:14:43.032 { 00:14:43.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.032 "dma_device_type": 2 00:14:43.032 } 00:14:43.032 ], 00:14:43.032 "driver_specific": {} 00:14:43.032 }' 00:14:43.032 14:07:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:43.290 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:43.549 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:43.549 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:43.549 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:43.549 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:43.549 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:43.807 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:43.807 "name": "BaseBdev2", 00:14:43.807 "aliases": [ 00:14:43.808 "514eefca-8fda-49a4-8521-714fd3c14caa" 00:14:43.808 ], 00:14:43.808 "product_name": "Malloc disk", 00:14:43.808 "block_size": 512, 00:14:43.808 "num_blocks": 65536, 00:14:43.808 "uuid": "514eefca-8fda-49a4-8521-714fd3c14caa", 00:14:43.808 "assigned_rate_limits": { 00:14:43.808 "rw_ios_per_sec": 0, 00:14:43.808 "rw_mbytes_per_sec": 0, 00:14:43.808 "r_mbytes_per_sec": 0, 00:14:43.808 "w_mbytes_per_sec": 0 00:14:43.808 }, 00:14:43.808 "claimed": true, 00:14:43.808 "claim_type": "exclusive_write", 00:14:43.808 "zoned": false, 00:14:43.808 "supported_io_types": { 00:14:43.808 "read": true, 00:14:43.808 "write": true, 00:14:43.808 "unmap": true, 00:14:43.808 "flush": true, 00:14:43.808 "reset": true, 00:14:43.808 "nvme_admin": false, 00:14:43.808 "nvme_io": false, 00:14:43.808 "nvme_io_md": false, 00:14:43.808 "write_zeroes": true, 00:14:43.808 "zcopy": true, 00:14:43.808 "get_zone_info": false, 00:14:43.808 "zone_management": false, 00:14:43.808 "zone_append": false, 00:14:43.808 "compare": false, 00:14:43.808 "compare_and_write": false, 00:14:43.808 "abort": true, 00:14:43.808 "seek_hole": false, 00:14:43.808 "seek_data": false, 00:14:43.808 "copy": true, 00:14:43.808 "nvme_iov_md": false 00:14:43.808 }, 00:14:43.808 "memory_domains": [ 00:14:43.808 { 00:14:43.808 "dma_device_id": "system", 00:14:43.808 "dma_device_type": 1 00:14:43.808 }, 00:14:43.808 { 00:14:43.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.808 "dma_device_type": 2 00:14:43.808 } 00:14:43.808 ], 00:14:43.808 "driver_specific": {} 00:14:43.808 }' 00:14:43.808 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.808 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.808 14:07:29 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:43.808 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.808 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.080 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.080 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.080 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.080 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.080 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.080 14:07:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.080 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.080 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:44.338 [2024-07-15 14:07:30.262998] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.338 [2024-07-15 14:07:30.263795] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.338 [2024-07-15 14:07:30.264007] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.596 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.855 14:07:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.855 "name": "Existed_Raid", 00:14:44.855 "uuid": "9e18100e-acbf-40a5-9a77-d9b4b0c0f526", 00:14:44.855 "strip_size_kb": 64, 00:14:44.855 "state": "offline", 00:14:44.855 "raid_level": "concat", 00:14:44.855 "superblock": false, 00:14:44.855 "num_base_bdevs": 2, 00:14:44.855 "num_base_bdevs_discovered": 1, 00:14:44.855 "num_base_bdevs_operational": 1, 00:14:44.855 "base_bdevs_list": [ 00:14:44.855 { 00:14:44.855 "name": null, 00:14:44.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.855 "is_configured": false, 00:14:44.855 "data_offset": 0, 00:14:44.855 "data_size": 65536 00:14:44.855 }, 00:14:44.855 { 00:14:44.855 "name": "BaseBdev2", 00:14:44.855 "uuid": "514eefca-8fda-49a4-8521-714fd3c14caa", 00:14:44.855 "is_configured": true, 00:14:44.855 "data_offset": 0, 00:14:44.855 "data_size": 65536 00:14:44.855 } 00:14:44.855 ] 00:14:44.855 }' 00:14:44.855 14:07:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.855 14:07:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.423 14:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:45.423 14:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:45.423 14:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.423 14:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:45.990 14:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:45.990 14:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:45.990 14:07:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:45.990 [2024-07-15 14:07:31.967229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:45.990 [2024-07-15 14:07:31.967493] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:46.248 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:46.248 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:46.248 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:46.248 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 187920 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 187920 ']' 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 187920 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@953 -- # uname 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 187920 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 187920' 00:14:46.506 killing process with pid 187920 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 187920 00:14:46.506 [2024-07-15 14:07:32.385779] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.506 14:07:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 187920 00:14:46.506 [2024-07-15 14:07:32.386074] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:47.882 00:14:47.882 real 0m12.884s 00:14:47.882 user 0m22.594s 00:14:47.882 sys 0m1.501s 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.882 ************************************ 00:14:47.882 END TEST raid_state_function_test 00:14:47.882 ************************************ 00:14:47.882 14:07:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:47.882 14:07:33 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:47.882 14:07:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:47.882 14:07:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.882 14:07:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:47.882 ************************************ 00:14:47.882 START TEST raid_state_function_test_sb 00:14:47.882 ************************************ 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:47.882 14:07:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=188306 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 188306' 00:14:47.882 Process raid pid: 188306 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 188306 /var/tmp/spdk-raid.sock 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 188306 ']' 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:47.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.882 14:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.882 [2024-07-15 14:07:33.628145] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
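The _sb variant that starts here repeats the same configuring/online checks, but passes -s to bdev_raid_create so the base bdevs carry an on-disk superblock; later in the trace this shows up as data_offset 2048 and data_size 63488 instead of 0 and 65536. Condensed to the raw RPC calls visible in the log (a sketch only; the jq state assertions and the delete/re-create steps are omitted, and the rpc wrapper function is ad hoc):

    SPDK=/home/vagrant/spdk_repo/spdk
    rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-raid.sock "$@"; }

    # Register a 2-disk concat raid with a superblock (-s) before its bases exist;
    # the raid is created in the "configuring" state.
    rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # Back the first slot with a 32 MiB malloc disk (512-byte blocks -> 65536 blocks).
    rpc bdev_malloc_create 32 512 -b BaseBdev1

    # Inspect the array: one of two base bdevs discovered, state still "configuring".
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'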
00:14:47.882 [2024-07-15 14:07:33.628870] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.882 [2024-07-15 14:07:33.780599] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.140 [2024-07-15 14:07:34.033948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.399 [2024-07-15 14:07:34.247440] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:48.713 14:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:48.713 14:07:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:48.713 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:48.987 [2024-07-15 14:07:34.903674] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.987 [2024-07-15 14:07:34.904361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.987 [2024-07-15 14:07:34.904509] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.987 [2024-07-15 14:07:34.904720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.987 14:07:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.246 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.246 "name": "Existed_Raid", 00:14:49.246 "uuid": "a74612d9-4a68-4ee1-873e-bdb1a2f447a3", 00:14:49.246 "strip_size_kb": 64, 00:14:49.246 "state": "configuring", 00:14:49.246 "raid_level": "concat", 00:14:49.246 "superblock": true, 00:14:49.246 "num_base_bdevs": 2, 00:14:49.246 "num_base_bdevs_discovered": 0, 00:14:49.246 
"num_base_bdevs_operational": 2, 00:14:49.246 "base_bdevs_list": [ 00:14:49.246 { 00:14:49.246 "name": "BaseBdev1", 00:14:49.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.246 "is_configured": false, 00:14:49.246 "data_offset": 0, 00:14:49.246 "data_size": 0 00:14:49.246 }, 00:14:49.246 { 00:14:49.246 "name": "BaseBdev2", 00:14:49.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.246 "is_configured": false, 00:14:49.246 "data_offset": 0, 00:14:49.246 "data_size": 0 00:14:49.246 } 00:14:49.246 ] 00:14:49.246 }' 00:14:49.246 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.246 14:07:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.815 14:07:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:50.073 [2024-07-15 14:07:36.019754] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:50.074 [2024-07-15 14:07:36.019991] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:50.074 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:50.332 [2024-07-15 14:07:36.271844] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.332 [2024-07-15 14:07:36.272390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.332 [2024-07-15 14:07:36.272532] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.332 [2024-07-15 14:07:36.272679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.332 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:50.899 [2024-07-15 14:07:36.595671] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:50.899 BaseBdev1 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:50.899 14:07:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:51.158 [ 00:14:51.158 { 00:14:51.158 "name": "BaseBdev1", 00:14:51.158 "aliases": [ 00:14:51.158 "37fca87b-1088-4e02-8928-3a4d8637627a" 
00:14:51.158 ], 00:14:51.158 "product_name": "Malloc disk", 00:14:51.158 "block_size": 512, 00:14:51.158 "num_blocks": 65536, 00:14:51.158 "uuid": "37fca87b-1088-4e02-8928-3a4d8637627a", 00:14:51.158 "assigned_rate_limits": { 00:14:51.158 "rw_ios_per_sec": 0, 00:14:51.158 "rw_mbytes_per_sec": 0, 00:14:51.158 "r_mbytes_per_sec": 0, 00:14:51.158 "w_mbytes_per_sec": 0 00:14:51.158 }, 00:14:51.158 "claimed": true, 00:14:51.158 "claim_type": "exclusive_write", 00:14:51.158 "zoned": false, 00:14:51.158 "supported_io_types": { 00:14:51.158 "read": true, 00:14:51.158 "write": true, 00:14:51.158 "unmap": true, 00:14:51.158 "flush": true, 00:14:51.158 "reset": true, 00:14:51.158 "nvme_admin": false, 00:14:51.158 "nvme_io": false, 00:14:51.158 "nvme_io_md": false, 00:14:51.158 "write_zeroes": true, 00:14:51.158 "zcopy": true, 00:14:51.158 "get_zone_info": false, 00:14:51.158 "zone_management": false, 00:14:51.158 "zone_append": false, 00:14:51.158 "compare": false, 00:14:51.158 "compare_and_write": false, 00:14:51.158 "abort": true, 00:14:51.158 "seek_hole": false, 00:14:51.158 "seek_data": false, 00:14:51.158 "copy": true, 00:14:51.158 "nvme_iov_md": false 00:14:51.158 }, 00:14:51.158 "memory_domains": [ 00:14:51.158 { 00:14:51.158 "dma_device_id": "system", 00:14:51.158 "dma_device_type": 1 00:14:51.158 }, 00:14:51.158 { 00:14:51.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.158 "dma_device_type": 2 00:14:51.158 } 00:14:51.158 ], 00:14:51.158 "driver_specific": {} 00:14:51.158 } 00:14:51.158 ] 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.417 "name": "Existed_Raid", 00:14:51.417 "uuid": "2742effe-9d3b-4b1f-990a-3ea7bebf602e", 00:14:51.417 "strip_size_kb": 64, 00:14:51.417 "state": "configuring", 00:14:51.417 "raid_level": "concat", 00:14:51.417 "superblock": true, 00:14:51.417 "num_base_bdevs": 2, 00:14:51.417 
"num_base_bdevs_discovered": 1, 00:14:51.417 "num_base_bdevs_operational": 2, 00:14:51.417 "base_bdevs_list": [ 00:14:51.417 { 00:14:51.417 "name": "BaseBdev1", 00:14:51.417 "uuid": "37fca87b-1088-4e02-8928-3a4d8637627a", 00:14:51.417 "is_configured": true, 00:14:51.417 "data_offset": 2048, 00:14:51.417 "data_size": 63488 00:14:51.417 }, 00:14:51.417 { 00:14:51.417 "name": "BaseBdev2", 00:14:51.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.417 "is_configured": false, 00:14:51.417 "data_offset": 0, 00:14:51.417 "data_size": 0 00:14:51.417 } 00:14:51.417 ] 00:14:51.417 }' 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.417 14:07:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.378 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:52.378 [2024-07-15 14:07:38.292147] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:52.378 [2024-07-15 14:07:38.292402] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:52.378 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:52.636 [2024-07-15 14:07:38.588238] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.636 [2024-07-15 14:07:38.589968] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.636 [2024-07-15 14:07:38.590148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.636 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.637 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.637 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.637 14:07:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.203 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.203 "name": "Existed_Raid", 00:14:53.203 "uuid": "3b9021eb-7075-4cf1-93c5-0abc723abbee", 00:14:53.203 "strip_size_kb": 64, 00:14:53.203 "state": "configuring", 00:14:53.203 "raid_level": "concat", 00:14:53.203 "superblock": true, 00:14:53.203 "num_base_bdevs": 2, 00:14:53.203 "num_base_bdevs_discovered": 1, 00:14:53.203 "num_base_bdevs_operational": 2, 00:14:53.203 "base_bdevs_list": [ 00:14:53.203 { 00:14:53.203 "name": "BaseBdev1", 00:14:53.203 "uuid": "37fca87b-1088-4e02-8928-3a4d8637627a", 00:14:53.203 "is_configured": true, 00:14:53.203 "data_offset": 2048, 00:14:53.203 "data_size": 63488 00:14:53.203 }, 00:14:53.203 { 00:14:53.203 "name": "BaseBdev2", 00:14:53.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.203 "is_configured": false, 00:14:53.203 "data_offset": 0, 00:14:53.203 "data_size": 0 00:14:53.203 } 00:14:53.203 ] 00:14:53.203 }' 00:14:53.203 14:07:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.203 14:07:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.769 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:53.769 [2024-07-15 14:07:39.737122] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:53.769 [2024-07-15 14:07:39.737496] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:53.769 [2024-07-15 14:07:39.737614] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:53.769 [2024-07-15 14:07:39.737791] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:53.769 [2024-07-15 14:07:39.738069] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:53.769 [2024-07-15 14:07:39.738198] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:53.769 [2024-07-15 14:07:39.738437] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.769 BaseBdev2 00:14:53.769 14:07:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:53.769 14:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:53.769 14:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:53.769 14:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:53.769 14:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:53.770 14:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:53.770 14:07:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.335 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.592 [ 00:14:54.592 { 00:14:54.592 "name": "BaseBdev2", 00:14:54.592 
"aliases": [ 00:14:54.592 "fc822a78-79df-4743-82c1-5159e82cb699" 00:14:54.592 ], 00:14:54.592 "product_name": "Malloc disk", 00:14:54.592 "block_size": 512, 00:14:54.592 "num_blocks": 65536, 00:14:54.592 "uuid": "fc822a78-79df-4743-82c1-5159e82cb699", 00:14:54.592 "assigned_rate_limits": { 00:14:54.592 "rw_ios_per_sec": 0, 00:14:54.592 "rw_mbytes_per_sec": 0, 00:14:54.592 "r_mbytes_per_sec": 0, 00:14:54.592 "w_mbytes_per_sec": 0 00:14:54.592 }, 00:14:54.592 "claimed": true, 00:14:54.592 "claim_type": "exclusive_write", 00:14:54.592 "zoned": false, 00:14:54.592 "supported_io_types": { 00:14:54.592 "read": true, 00:14:54.592 "write": true, 00:14:54.592 "unmap": true, 00:14:54.592 "flush": true, 00:14:54.592 "reset": true, 00:14:54.592 "nvme_admin": false, 00:14:54.592 "nvme_io": false, 00:14:54.592 "nvme_io_md": false, 00:14:54.592 "write_zeroes": true, 00:14:54.592 "zcopy": true, 00:14:54.592 "get_zone_info": false, 00:14:54.592 "zone_management": false, 00:14:54.592 "zone_append": false, 00:14:54.592 "compare": false, 00:14:54.592 "compare_and_write": false, 00:14:54.592 "abort": true, 00:14:54.592 "seek_hole": false, 00:14:54.592 "seek_data": false, 00:14:54.592 "copy": true, 00:14:54.592 "nvme_iov_md": false 00:14:54.592 }, 00:14:54.592 "memory_domains": [ 00:14:54.592 { 00:14:54.592 "dma_device_id": "system", 00:14:54.592 "dma_device_type": 1 00:14:54.592 }, 00:14:54.592 { 00:14:54.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.592 "dma_device_type": 2 00:14:54.592 } 00:14:54.592 ], 00:14:54.592 "driver_specific": {} 00:14:54.592 } 00:14:54.592 ] 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.592 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.851 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.851 "name": "Existed_Raid", 
00:14:54.851 "uuid": "3b9021eb-7075-4cf1-93c5-0abc723abbee", 00:14:54.851 "strip_size_kb": 64, 00:14:54.851 "state": "online", 00:14:54.851 "raid_level": "concat", 00:14:54.851 "superblock": true, 00:14:54.851 "num_base_bdevs": 2, 00:14:54.851 "num_base_bdevs_discovered": 2, 00:14:54.851 "num_base_bdevs_operational": 2, 00:14:54.851 "base_bdevs_list": [ 00:14:54.851 { 00:14:54.851 "name": "BaseBdev1", 00:14:54.851 "uuid": "37fca87b-1088-4e02-8928-3a4d8637627a", 00:14:54.851 "is_configured": true, 00:14:54.851 "data_offset": 2048, 00:14:54.851 "data_size": 63488 00:14:54.851 }, 00:14:54.851 { 00:14:54.851 "name": "BaseBdev2", 00:14:54.851 "uuid": "fc822a78-79df-4743-82c1-5159e82cb699", 00:14:54.851 "is_configured": true, 00:14:54.851 "data_offset": 2048, 00:14:54.851 "data_size": 63488 00:14:54.851 } 00:14:54.851 ] 00:14:54.851 }' 00:14:54.851 14:07:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.851 14:07:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:55.416 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:55.676 [2024-07-15 14:07:41.505627] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.676 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:55.676 "name": "Existed_Raid", 00:14:55.676 "aliases": [ 00:14:55.676 "3b9021eb-7075-4cf1-93c5-0abc723abbee" 00:14:55.676 ], 00:14:55.676 "product_name": "Raid Volume", 00:14:55.676 "block_size": 512, 00:14:55.676 "num_blocks": 126976, 00:14:55.676 "uuid": "3b9021eb-7075-4cf1-93c5-0abc723abbee", 00:14:55.676 "assigned_rate_limits": { 00:14:55.676 "rw_ios_per_sec": 0, 00:14:55.676 "rw_mbytes_per_sec": 0, 00:14:55.676 "r_mbytes_per_sec": 0, 00:14:55.676 "w_mbytes_per_sec": 0 00:14:55.676 }, 00:14:55.676 "claimed": false, 00:14:55.676 "zoned": false, 00:14:55.676 "supported_io_types": { 00:14:55.676 "read": true, 00:14:55.676 "write": true, 00:14:55.676 "unmap": true, 00:14:55.676 "flush": true, 00:14:55.676 "reset": true, 00:14:55.676 "nvme_admin": false, 00:14:55.676 "nvme_io": false, 00:14:55.676 "nvme_io_md": false, 00:14:55.676 "write_zeroes": true, 00:14:55.676 "zcopy": false, 00:14:55.676 "get_zone_info": false, 00:14:55.676 "zone_management": false, 00:14:55.676 "zone_append": false, 00:14:55.676 "compare": false, 00:14:55.676 "compare_and_write": false, 00:14:55.676 "abort": false, 00:14:55.676 "seek_hole": false, 00:14:55.676 "seek_data": false, 00:14:55.676 "copy": false, 00:14:55.676 "nvme_iov_md": false 00:14:55.676 }, 00:14:55.676 "memory_domains": [ 
00:14:55.676 { 00:14:55.676 "dma_device_id": "system", 00:14:55.676 "dma_device_type": 1 00:14:55.676 }, 00:14:55.676 { 00:14:55.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.676 "dma_device_type": 2 00:14:55.676 }, 00:14:55.676 { 00:14:55.676 "dma_device_id": "system", 00:14:55.676 "dma_device_type": 1 00:14:55.676 }, 00:14:55.676 { 00:14:55.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.676 "dma_device_type": 2 00:14:55.676 } 00:14:55.676 ], 00:14:55.676 "driver_specific": { 00:14:55.676 "raid": { 00:14:55.676 "uuid": "3b9021eb-7075-4cf1-93c5-0abc723abbee", 00:14:55.676 "strip_size_kb": 64, 00:14:55.676 "state": "online", 00:14:55.676 "raid_level": "concat", 00:14:55.676 "superblock": true, 00:14:55.676 "num_base_bdevs": 2, 00:14:55.676 "num_base_bdevs_discovered": 2, 00:14:55.676 "num_base_bdevs_operational": 2, 00:14:55.676 "base_bdevs_list": [ 00:14:55.676 { 00:14:55.676 "name": "BaseBdev1", 00:14:55.676 "uuid": "37fca87b-1088-4e02-8928-3a4d8637627a", 00:14:55.676 "is_configured": true, 00:14:55.676 "data_offset": 2048, 00:14:55.676 "data_size": 63488 00:14:55.676 }, 00:14:55.676 { 00:14:55.676 "name": "BaseBdev2", 00:14:55.676 "uuid": "fc822a78-79df-4743-82c1-5159e82cb699", 00:14:55.676 "is_configured": true, 00:14:55.676 "data_offset": 2048, 00:14:55.676 "data_size": 63488 00:14:55.676 } 00:14:55.676 ] 00:14:55.676 } 00:14:55.676 } 00:14:55.676 }' 00:14:55.676 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.676 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:55.676 BaseBdev2' 00:14:55.676 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:55.676 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:55.676 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:55.938 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:55.938 "name": "BaseBdev1", 00:14:55.938 "aliases": [ 00:14:55.938 "37fca87b-1088-4e02-8928-3a4d8637627a" 00:14:55.938 ], 00:14:55.938 "product_name": "Malloc disk", 00:14:55.938 "block_size": 512, 00:14:55.938 "num_blocks": 65536, 00:14:55.938 "uuid": "37fca87b-1088-4e02-8928-3a4d8637627a", 00:14:55.938 "assigned_rate_limits": { 00:14:55.938 "rw_ios_per_sec": 0, 00:14:55.938 "rw_mbytes_per_sec": 0, 00:14:55.938 "r_mbytes_per_sec": 0, 00:14:55.938 "w_mbytes_per_sec": 0 00:14:55.938 }, 00:14:55.938 "claimed": true, 00:14:55.938 "claim_type": "exclusive_write", 00:14:55.938 "zoned": false, 00:14:55.938 "supported_io_types": { 00:14:55.938 "read": true, 00:14:55.938 "write": true, 00:14:55.938 "unmap": true, 00:14:55.938 "flush": true, 00:14:55.938 "reset": true, 00:14:55.938 "nvme_admin": false, 00:14:55.938 "nvme_io": false, 00:14:55.938 "nvme_io_md": false, 00:14:55.938 "write_zeroes": true, 00:14:55.938 "zcopy": true, 00:14:55.938 "get_zone_info": false, 00:14:55.938 "zone_management": false, 00:14:55.938 "zone_append": false, 00:14:55.938 "compare": false, 00:14:55.938 "compare_and_write": false, 00:14:55.938 "abort": true, 00:14:55.938 "seek_hole": false, 00:14:55.938 "seek_data": false, 00:14:55.938 "copy": true, 00:14:55.938 "nvme_iov_md": false 00:14:55.938 }, 00:14:55.938 "memory_domains": [ 
00:14:55.938 { 00:14:55.938 "dma_device_id": "system", 00:14:55.938 "dma_device_type": 1 00:14:55.938 }, 00:14:55.938 { 00:14:55.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.938 "dma_device_type": 2 00:14:55.938 } 00:14:55.938 ], 00:14:55.938 "driver_specific": {} 00:14:55.938 }' 00:14:55.938 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.938 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.196 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.196 14:07:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.196 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.196 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.196 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.196 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.196 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.196 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.454 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.454 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.454 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.454 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:56.454 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.711 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.711 "name": "BaseBdev2", 00:14:56.711 "aliases": [ 00:14:56.711 "fc822a78-79df-4743-82c1-5159e82cb699" 00:14:56.711 ], 00:14:56.711 "product_name": "Malloc disk", 00:14:56.711 "block_size": 512, 00:14:56.711 "num_blocks": 65536, 00:14:56.711 "uuid": "fc822a78-79df-4743-82c1-5159e82cb699", 00:14:56.711 "assigned_rate_limits": { 00:14:56.711 "rw_ios_per_sec": 0, 00:14:56.711 "rw_mbytes_per_sec": 0, 00:14:56.711 "r_mbytes_per_sec": 0, 00:14:56.711 "w_mbytes_per_sec": 0 00:14:56.711 }, 00:14:56.711 "claimed": true, 00:14:56.711 "claim_type": "exclusive_write", 00:14:56.711 "zoned": false, 00:14:56.711 "supported_io_types": { 00:14:56.711 "read": true, 00:14:56.711 "write": true, 00:14:56.711 "unmap": true, 00:14:56.711 "flush": true, 00:14:56.711 "reset": true, 00:14:56.711 "nvme_admin": false, 00:14:56.711 "nvme_io": false, 00:14:56.711 "nvme_io_md": false, 00:14:56.711 "write_zeroes": true, 00:14:56.711 "zcopy": true, 00:14:56.711 "get_zone_info": false, 00:14:56.711 "zone_management": false, 00:14:56.711 "zone_append": false, 00:14:56.711 "compare": false, 00:14:56.711 "compare_and_write": false, 00:14:56.711 "abort": true, 00:14:56.711 "seek_hole": false, 00:14:56.711 "seek_data": false, 00:14:56.711 "copy": true, 00:14:56.711 "nvme_iov_md": false 00:14:56.711 }, 00:14:56.711 "memory_domains": [ 00:14:56.711 { 00:14:56.711 "dma_device_id": "system", 00:14:56.711 "dma_device_type": 1 00:14:56.711 }, 00:14:56.711 { 00:14:56.711 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:56.711 "dma_device_type": 2 00:14:56.711 } 00:14:56.711 ], 00:14:56.711 "driver_specific": {} 00:14:56.711 }' 00:14:56.711 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.711 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.711 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.711 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.970 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.970 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.970 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.970 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.970 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.970 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.970 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:57.228 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:57.228 14:07:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:57.486 [2024-07-15 14:07:43.253905] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.486 [2024-07-15 14:07:43.254149] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:57.486 [2024-07-15 14:07:43.254295] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.486 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.744 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.744 "name": "Existed_Raid", 00:14:57.744 "uuid": "3b9021eb-7075-4cf1-93c5-0abc723abbee", 00:14:57.744 "strip_size_kb": 64, 00:14:57.744 "state": "offline", 00:14:57.744 "raid_level": "concat", 00:14:57.744 "superblock": true, 00:14:57.744 "num_base_bdevs": 2, 00:14:57.744 "num_base_bdevs_discovered": 1, 00:14:57.744 "num_base_bdevs_operational": 1, 00:14:57.744 "base_bdevs_list": [ 00:14:57.744 { 00:14:57.744 "name": null, 00:14:57.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.744 "is_configured": false, 00:14:57.744 "data_offset": 2048, 00:14:57.744 "data_size": 63488 00:14:57.744 }, 00:14:57.744 { 00:14:57.744 "name": "BaseBdev2", 00:14:57.744 "uuid": "fc822a78-79df-4743-82c1-5159e82cb699", 00:14:57.744 "is_configured": true, 00:14:57.744 "data_offset": 2048, 00:14:57.744 "data_size": 63488 00:14:57.744 } 00:14:57.744 ] 00:14:57.744 }' 00:14:57.744 14:07:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.744 14:07:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.679 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:58.679 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:58.679 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.679 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:58.679 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:58.679 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.679 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:58.937 [2024-07-15 14:07:44.789470] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.937 [2024-07-15 14:07:44.789856] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:58.937 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:58.937 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:58.937 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:58.937 14:07:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 188306 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 188306 ']' 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 188306 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 188306 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 188306' 00:14:59.503 killing process with pid 188306 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 188306 00:14:59.503 [2024-07-15 14:07:45.226255] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.503 14:07:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 188306 00:14:59.503 [2024-07-15 14:07:45.226516] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.440 14:07:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:00.440 00:15:00.440 real 0m12.781s 00:15:00.440 user 0m22.442s 00:15:00.440 sys 0m1.469s 00:15:00.440 14:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.440 14:07:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.440 ************************************ 00:15:00.440 END TEST raid_state_function_test_sb 00:15:00.440 ************************************ 00:15:00.440 14:07:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:00.440 14:07:46 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:00.440 14:07:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:00.440 14:07:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.440 14:07:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.440 ************************************ 00:15:00.440 START TEST raid_superblock_test 00:15:00.440 ************************************ 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 
-- # local base_bdevs_pt 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=188698 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 188698 /var/tmp/spdk-raid.sock 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 188698 ']' 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:00.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.440 14:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.698 [2024-07-15 14:07:46.467237] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:15:00.698 [2024-07-15 14:07:46.467640] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188698 ] 00:15:00.698 [2024-07-15 14:07:46.620531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.956 [2024-07-15 14:07:46.869892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.227 [2024-07-15 14:07:47.073627] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:01.792 malloc1 00:15:01.792 14:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:02.050 [2024-07-15 14:07:48.013936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:02.050 [2024-07-15 14:07:48.014282] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.050 [2024-07-15 14:07:48.014467] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:02.050 [2024-07-15 14:07:48.014643] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.050 [2024-07-15 14:07:48.016634] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.050 [2024-07-15 14:07:48.016828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:02.050 pt1 00:15:02.050 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:02.050 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:02.050 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:02.050 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:02.050 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:02.050 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.051 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.051 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.051 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:02.309 malloc2 00:15:02.566 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:02.566 [2024-07-15 14:07:48.538082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:02.566 [2024-07-15 14:07:48.538446] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.566 [2024-07-15 14:07:48.538529] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:02.566 [2024-07-15 14:07:48.538830] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.566 [2024-07-15 14:07:48.540920] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.566 [2024-07-15 14:07:48.541147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:02.566 pt2 00:15:02.566 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:02.566 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:02.566 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:02.824 [2024-07-15 14:07:48.822157] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:02.824 [2024-07-15 14:07:48.824053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:02.824 [2024-07-15 14:07:48.824355] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:02.824 [2024-07-15 14:07:48.824583] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.824 [2024-07-15 14:07:48.824806] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:02.824 [2024-07-15 14:07:48.825238] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:02.824 [2024-07-15 14:07:48.825365] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:02.824 [2024-07-15 14:07:48.825592] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.082 14:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.340 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.340 "name": "raid_bdev1", 00:15:03.340 "uuid": "9840fe0a-0ea9-4975-857c-2827ea936cf7", 00:15:03.340 "strip_size_kb": 64, 00:15:03.340 "state": "online", 00:15:03.340 "raid_level": "concat", 00:15:03.340 "superblock": true, 00:15:03.340 "num_base_bdevs": 2, 00:15:03.340 "num_base_bdevs_discovered": 2, 00:15:03.340 "num_base_bdevs_operational": 2, 00:15:03.340 "base_bdevs_list": [ 00:15:03.340 { 00:15:03.340 "name": "pt1", 00:15:03.340 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:03.340 "is_configured": true, 00:15:03.340 "data_offset": 2048, 00:15:03.340 "data_size": 63488 00:15:03.340 }, 00:15:03.340 { 00:15:03.340 "name": "pt2", 00:15:03.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:03.340 "is_configured": true, 00:15:03.340 "data_offset": 2048, 00:15:03.340 "data_size": 63488 00:15:03.340 } 00:15:03.340 ] 00:15:03.340 }' 00:15:03.340 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.340 14:07:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:03.907 14:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:04.166 [2024-07-15 14:07:50.034467] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:04.166 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:04.166 "name": "raid_bdev1", 00:15:04.166 "aliases": [ 00:15:04.166 "9840fe0a-0ea9-4975-857c-2827ea936cf7" 00:15:04.166 ], 00:15:04.166 "product_name": "Raid Volume", 00:15:04.166 "block_size": 512, 00:15:04.166 "num_blocks": 126976, 00:15:04.166 "uuid": "9840fe0a-0ea9-4975-857c-2827ea936cf7", 00:15:04.166 "assigned_rate_limits": { 00:15:04.166 "rw_ios_per_sec": 0, 00:15:04.166 "rw_mbytes_per_sec": 0, 00:15:04.166 "r_mbytes_per_sec": 0, 00:15:04.166 "w_mbytes_per_sec": 0 00:15:04.166 }, 
00:15:04.166 "claimed": false, 00:15:04.166 "zoned": false, 00:15:04.166 "supported_io_types": { 00:15:04.166 "read": true, 00:15:04.166 "write": true, 00:15:04.166 "unmap": true, 00:15:04.166 "flush": true, 00:15:04.166 "reset": true, 00:15:04.166 "nvme_admin": false, 00:15:04.166 "nvme_io": false, 00:15:04.166 "nvme_io_md": false, 00:15:04.166 "write_zeroes": true, 00:15:04.166 "zcopy": false, 00:15:04.166 "get_zone_info": false, 00:15:04.166 "zone_management": false, 00:15:04.166 "zone_append": false, 00:15:04.166 "compare": false, 00:15:04.166 "compare_and_write": false, 00:15:04.166 "abort": false, 00:15:04.166 "seek_hole": false, 00:15:04.166 "seek_data": false, 00:15:04.166 "copy": false, 00:15:04.166 "nvme_iov_md": false 00:15:04.166 }, 00:15:04.166 "memory_domains": [ 00:15:04.166 { 00:15:04.166 "dma_device_id": "system", 00:15:04.166 "dma_device_type": 1 00:15:04.166 }, 00:15:04.166 { 00:15:04.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.166 "dma_device_type": 2 00:15:04.166 }, 00:15:04.166 { 00:15:04.166 "dma_device_id": "system", 00:15:04.166 "dma_device_type": 1 00:15:04.166 }, 00:15:04.166 { 00:15:04.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.166 "dma_device_type": 2 00:15:04.166 } 00:15:04.166 ], 00:15:04.166 "driver_specific": { 00:15:04.166 "raid": { 00:15:04.166 "uuid": "9840fe0a-0ea9-4975-857c-2827ea936cf7", 00:15:04.166 "strip_size_kb": 64, 00:15:04.166 "state": "online", 00:15:04.166 "raid_level": "concat", 00:15:04.166 "superblock": true, 00:15:04.166 "num_base_bdevs": 2, 00:15:04.166 "num_base_bdevs_discovered": 2, 00:15:04.166 "num_base_bdevs_operational": 2, 00:15:04.166 "base_bdevs_list": [ 00:15:04.166 { 00:15:04.166 "name": "pt1", 00:15:04.166 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.166 "is_configured": true, 00:15:04.166 "data_offset": 2048, 00:15:04.166 "data_size": 63488 00:15:04.166 }, 00:15:04.166 { 00:15:04.166 "name": "pt2", 00:15:04.166 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:04.166 "is_configured": true, 00:15:04.166 "data_offset": 2048, 00:15:04.166 "data_size": 63488 00:15:04.166 } 00:15:04.166 ] 00:15:04.166 } 00:15:04.166 } 00:15:04.166 }' 00:15:04.166 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:04.166 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:04.166 pt2' 00:15:04.166 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.166 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:04.166 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:04.425 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:04.425 "name": "pt1", 00:15:04.425 "aliases": [ 00:15:04.425 "00000000-0000-0000-0000-000000000001" 00:15:04.425 ], 00:15:04.425 "product_name": "passthru", 00:15:04.425 "block_size": 512, 00:15:04.425 "num_blocks": 65536, 00:15:04.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:04.425 "assigned_rate_limits": { 00:15:04.425 "rw_ios_per_sec": 0, 00:15:04.425 "rw_mbytes_per_sec": 0, 00:15:04.425 "r_mbytes_per_sec": 0, 00:15:04.425 "w_mbytes_per_sec": 0 00:15:04.425 }, 00:15:04.425 "claimed": true, 00:15:04.425 "claim_type": "exclusive_write", 00:15:04.425 "zoned": false, 00:15:04.425 
"supported_io_types": { 00:15:04.425 "read": true, 00:15:04.425 "write": true, 00:15:04.425 "unmap": true, 00:15:04.425 "flush": true, 00:15:04.425 "reset": true, 00:15:04.425 "nvme_admin": false, 00:15:04.425 "nvme_io": false, 00:15:04.425 "nvme_io_md": false, 00:15:04.425 "write_zeroes": true, 00:15:04.425 "zcopy": true, 00:15:04.425 "get_zone_info": false, 00:15:04.425 "zone_management": false, 00:15:04.425 "zone_append": false, 00:15:04.425 "compare": false, 00:15:04.425 "compare_and_write": false, 00:15:04.425 "abort": true, 00:15:04.425 "seek_hole": false, 00:15:04.425 "seek_data": false, 00:15:04.425 "copy": true, 00:15:04.425 "nvme_iov_md": false 00:15:04.425 }, 00:15:04.425 "memory_domains": [ 00:15:04.425 { 00:15:04.425 "dma_device_id": "system", 00:15:04.425 "dma_device_type": 1 00:15:04.425 }, 00:15:04.425 { 00:15:04.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.425 "dma_device_type": 2 00:15:04.425 } 00:15:04.425 ], 00:15:04.425 "driver_specific": { 00:15:04.425 "passthru": { 00:15:04.425 "name": "pt1", 00:15:04.425 "base_bdev_name": "malloc1" 00:15:04.425 } 00:15:04.425 } 00:15:04.425 }' 00:15:04.425 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.425 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:04.684 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.942 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:04.942 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:04.942 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:04.942 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:04.942 14:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:05.200 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:05.200 "name": "pt2", 00:15:05.200 "aliases": [ 00:15:05.200 "00000000-0000-0000-0000-000000000002" 00:15:05.200 ], 00:15:05.200 "product_name": "passthru", 00:15:05.200 "block_size": 512, 00:15:05.200 "num_blocks": 65536, 00:15:05.200 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.200 "assigned_rate_limits": { 00:15:05.200 "rw_ios_per_sec": 0, 00:15:05.200 "rw_mbytes_per_sec": 0, 00:15:05.200 "r_mbytes_per_sec": 0, 00:15:05.200 "w_mbytes_per_sec": 0 00:15:05.200 }, 00:15:05.200 "claimed": true, 00:15:05.200 "claim_type": "exclusive_write", 00:15:05.200 "zoned": false, 00:15:05.200 "supported_io_types": { 00:15:05.200 "read": true, 00:15:05.200 "write": true, 00:15:05.200 "unmap": true, 00:15:05.200 "flush": true, 00:15:05.200 
"reset": true, 00:15:05.200 "nvme_admin": false, 00:15:05.200 "nvme_io": false, 00:15:05.200 "nvme_io_md": false, 00:15:05.200 "write_zeroes": true, 00:15:05.200 "zcopy": true, 00:15:05.200 "get_zone_info": false, 00:15:05.200 "zone_management": false, 00:15:05.200 "zone_append": false, 00:15:05.200 "compare": false, 00:15:05.200 "compare_and_write": false, 00:15:05.200 "abort": true, 00:15:05.200 "seek_hole": false, 00:15:05.200 "seek_data": false, 00:15:05.200 "copy": true, 00:15:05.200 "nvme_iov_md": false 00:15:05.200 }, 00:15:05.200 "memory_domains": [ 00:15:05.200 { 00:15:05.200 "dma_device_id": "system", 00:15:05.200 "dma_device_type": 1 00:15:05.200 }, 00:15:05.200 { 00:15:05.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.200 "dma_device_type": 2 00:15:05.200 } 00:15:05.200 ], 00:15:05.200 "driver_specific": { 00:15:05.200 "passthru": { 00:15:05.200 "name": "pt2", 00:15:05.200 "base_bdev_name": "malloc2" 00:15:05.200 } 00:15:05.200 } 00:15:05.200 }' 00:15:05.200 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.200 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:05.200 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:05.200 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.200 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:05.458 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:05.717 [2024-07-15 14:07:51.634720] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.717 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9840fe0a-0ea9-4975-857c-2827ea936cf7 00:15:05.717 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9840fe0a-0ea9-4975-857c-2827ea936cf7 ']' 00:15:05.717 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:05.975 [2024-07-15 14:07:51.894576] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:05.975 [2024-07-15 14:07:51.894852] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.975 [2024-07-15 14:07:51.895044] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.975 [2024-07-15 14:07:51.895201] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:15:05.975 [2024-07-15 14:07:51.895314] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:05.975 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.975 14:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:06.232 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:06.232 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:06.232 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.232 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:06.488 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:06.488 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:06.746 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:06.746 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.004 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.004 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.004 14:07:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:07.004 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:07.004 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:07.329 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:07.329 [2024-07-15 14:07:53.282903] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:07.329 [2024-07-15 14:07:53.284635] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:07.329 [2024-07-15 14:07:53.284825] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:07.329 [2024-07-15 14:07:53.285046] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:07.329 [2024-07-15 14:07:53.285194] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.329 [2024-07-15 14:07:53.285306] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:07.329 request: 00:15:07.329 { 00:15:07.329 "name": "raid_bdev1", 00:15:07.329 "raid_level": "concat", 00:15:07.329 "base_bdevs": [ 00:15:07.329 "malloc1", 00:15:07.329 "malloc2" 00:15:07.329 ], 00:15:07.329 "strip_size_kb": 64, 00:15:07.329 "superblock": false, 00:15:07.329 "method": "bdev_raid_create", 00:15:07.329 "req_id": 1 00:15:07.329 } 00:15:07.329 Got JSON-RPC error response 00:15:07.329 response: 00:15:07.329 { 00:15:07.329 "code": -17, 00:15:07.329 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:07.329 } 00:15:07.329 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:07.329 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:07.329 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:07.329 14:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:07.329 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.329 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.894 [2024-07-15 14:07:53.834935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.894 [2024-07-15 14:07:53.835253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.894 [2024-07-15 14:07:53.835352] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:07.894 [2024-07-15 14:07:53.835609] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.894 [2024-07-15 14:07:53.837572] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.894 [2024-07-15 14:07:53.837773] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.894 [2024-07-15 14:07:53.838017] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:07.894 [2024-07-15 14:07:53.838184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.894 pt1 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # 
verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.894 14:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.152 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.152 "name": "raid_bdev1", 00:15:08.152 "uuid": "9840fe0a-0ea9-4975-857c-2827ea936cf7", 00:15:08.152 "strip_size_kb": 64, 00:15:08.152 "state": "configuring", 00:15:08.152 "raid_level": "concat", 00:15:08.152 "superblock": true, 00:15:08.152 "num_base_bdevs": 2, 00:15:08.152 "num_base_bdevs_discovered": 1, 00:15:08.152 "num_base_bdevs_operational": 2, 00:15:08.152 "base_bdevs_list": [ 00:15:08.152 { 00:15:08.152 "name": "pt1", 00:15:08.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.152 "is_configured": true, 00:15:08.153 "data_offset": 2048, 00:15:08.153 "data_size": 63488 00:15:08.153 }, 00:15:08.153 { 00:15:08.153 "name": null, 00:15:08.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.153 "is_configured": false, 00:15:08.153 "data_offset": 2048, 00:15:08.153 "data_size": 63488 00:15:08.153 } 00:15:08.153 ] 00:15:08.153 }' 00:15:08.153 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.153 14:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.087 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:09.087 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:09.087 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.087 14:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:09.087 [2024-07-15 14:07:55.051073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:09.087 [2024-07-15 14:07:55.051436] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:09.087 [2024-07-15 14:07:55.051615] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:09.087 [2024-07-15 14:07:55.051793] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:09.087 [2024-07-15 
14:07:55.052303] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:09.087 [2024-07-15 14:07:55.052493] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:09.087 [2024-07-15 14:07:55.052742] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:09.087 [2024-07-15 14:07:55.052876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:09.087 [2024-07-15 14:07:55.053067] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:09.087 [2024-07-15 14:07:55.053185] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:09.087 [2024-07-15 14:07:55.053316] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:09.087 [2024-07-15 14:07:55.053580] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:09.087 [2024-07-15 14:07:55.053629] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:09.087 [2024-07-15 14:07:55.053875] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.087 pt2 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.087 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.654 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:09.654 "name": "raid_bdev1", 00:15:09.654 "uuid": "9840fe0a-0ea9-4975-857c-2827ea936cf7", 00:15:09.654 "strip_size_kb": 64, 00:15:09.654 "state": "online", 00:15:09.654 "raid_level": "concat", 00:15:09.654 "superblock": true, 00:15:09.654 "num_base_bdevs": 2, 00:15:09.654 "num_base_bdevs_discovered": 2, 00:15:09.654 "num_base_bdevs_operational": 2, 00:15:09.654 "base_bdevs_list": [ 00:15:09.654 { 00:15:09.654 "name": "pt1", 00:15:09.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:09.654 "is_configured": true, 00:15:09.654 "data_offset": 2048, 00:15:09.654 
"data_size": 63488 00:15:09.654 }, 00:15:09.654 { 00:15:09.654 "name": "pt2", 00:15:09.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:09.654 "is_configured": true, 00:15:09.654 "data_offset": 2048, 00:15:09.654 "data_size": 63488 00:15:09.654 } 00:15:09.654 ] 00:15:09.654 }' 00:15:09.654 14:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:09.654 14:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.221 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:10.487 [2024-07-15 14:07:56.271404] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.487 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:10.487 "name": "raid_bdev1", 00:15:10.487 "aliases": [ 00:15:10.487 "9840fe0a-0ea9-4975-857c-2827ea936cf7" 00:15:10.487 ], 00:15:10.487 "product_name": "Raid Volume", 00:15:10.487 "block_size": 512, 00:15:10.487 "num_blocks": 126976, 00:15:10.487 "uuid": "9840fe0a-0ea9-4975-857c-2827ea936cf7", 00:15:10.487 "assigned_rate_limits": { 00:15:10.487 "rw_ios_per_sec": 0, 00:15:10.487 "rw_mbytes_per_sec": 0, 00:15:10.487 "r_mbytes_per_sec": 0, 00:15:10.487 "w_mbytes_per_sec": 0 00:15:10.487 }, 00:15:10.487 "claimed": false, 00:15:10.487 "zoned": false, 00:15:10.487 "supported_io_types": { 00:15:10.487 "read": true, 00:15:10.487 "write": true, 00:15:10.487 "unmap": true, 00:15:10.487 "flush": true, 00:15:10.487 "reset": true, 00:15:10.487 "nvme_admin": false, 00:15:10.487 "nvme_io": false, 00:15:10.487 "nvme_io_md": false, 00:15:10.487 "write_zeroes": true, 00:15:10.487 "zcopy": false, 00:15:10.487 "get_zone_info": false, 00:15:10.487 "zone_management": false, 00:15:10.487 "zone_append": false, 00:15:10.487 "compare": false, 00:15:10.487 "compare_and_write": false, 00:15:10.487 "abort": false, 00:15:10.487 "seek_hole": false, 00:15:10.487 "seek_data": false, 00:15:10.487 "copy": false, 00:15:10.487 "nvme_iov_md": false 00:15:10.487 }, 00:15:10.487 "memory_domains": [ 00:15:10.487 { 00:15:10.487 "dma_device_id": "system", 00:15:10.487 "dma_device_type": 1 00:15:10.487 }, 00:15:10.487 { 00:15:10.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.487 "dma_device_type": 2 00:15:10.487 }, 00:15:10.487 { 00:15:10.487 "dma_device_id": "system", 00:15:10.487 "dma_device_type": 1 00:15:10.487 }, 00:15:10.487 { 00:15:10.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.487 "dma_device_type": 2 00:15:10.487 } 00:15:10.487 ], 00:15:10.487 "driver_specific": { 00:15:10.487 "raid": { 00:15:10.487 "uuid": "9840fe0a-0ea9-4975-857c-2827ea936cf7", 00:15:10.487 "strip_size_kb": 64, 00:15:10.487 "state": 
"online", 00:15:10.487 "raid_level": "concat", 00:15:10.487 "superblock": true, 00:15:10.487 "num_base_bdevs": 2, 00:15:10.487 "num_base_bdevs_discovered": 2, 00:15:10.487 "num_base_bdevs_operational": 2, 00:15:10.487 "base_bdevs_list": [ 00:15:10.487 { 00:15:10.487 "name": "pt1", 00:15:10.487 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.487 "is_configured": true, 00:15:10.487 "data_offset": 2048, 00:15:10.487 "data_size": 63488 00:15:10.487 }, 00:15:10.487 { 00:15:10.487 "name": "pt2", 00:15:10.487 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.487 "is_configured": true, 00:15:10.487 "data_offset": 2048, 00:15:10.487 "data_size": 63488 00:15:10.487 } 00:15:10.487 ] 00:15:10.487 } 00:15:10.487 } 00:15:10.487 }' 00:15:10.487 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:10.487 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:10.487 pt2' 00:15:10.487 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:10.487 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:10.487 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:10.759 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:10.759 "name": "pt1", 00:15:10.759 "aliases": [ 00:15:10.759 "00000000-0000-0000-0000-000000000001" 00:15:10.759 ], 00:15:10.759 "product_name": "passthru", 00:15:10.759 "block_size": 512, 00:15:10.759 "num_blocks": 65536, 00:15:10.760 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.760 "assigned_rate_limits": { 00:15:10.760 "rw_ios_per_sec": 0, 00:15:10.760 "rw_mbytes_per_sec": 0, 00:15:10.760 "r_mbytes_per_sec": 0, 00:15:10.760 "w_mbytes_per_sec": 0 00:15:10.760 }, 00:15:10.760 "claimed": true, 00:15:10.760 "claim_type": "exclusive_write", 00:15:10.760 "zoned": false, 00:15:10.760 "supported_io_types": { 00:15:10.760 "read": true, 00:15:10.760 "write": true, 00:15:10.760 "unmap": true, 00:15:10.760 "flush": true, 00:15:10.760 "reset": true, 00:15:10.760 "nvme_admin": false, 00:15:10.760 "nvme_io": false, 00:15:10.760 "nvme_io_md": false, 00:15:10.760 "write_zeroes": true, 00:15:10.760 "zcopy": true, 00:15:10.760 "get_zone_info": false, 00:15:10.760 "zone_management": false, 00:15:10.760 "zone_append": false, 00:15:10.760 "compare": false, 00:15:10.760 "compare_and_write": false, 00:15:10.760 "abort": true, 00:15:10.760 "seek_hole": false, 00:15:10.760 "seek_data": false, 00:15:10.760 "copy": true, 00:15:10.760 "nvme_iov_md": false 00:15:10.760 }, 00:15:10.760 "memory_domains": [ 00:15:10.760 { 00:15:10.760 "dma_device_id": "system", 00:15:10.760 "dma_device_type": 1 00:15:10.760 }, 00:15:10.760 { 00:15:10.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:10.760 "dma_device_type": 2 00:15:10.760 } 00:15:10.760 ], 00:15:10.760 "driver_specific": { 00:15:10.760 "passthru": { 00:15:10.760 "name": "pt1", 00:15:10.760 "base_bdev_name": "malloc1" 00:15:10.760 } 00:15:10.760 } 00:15:10.760 }' 00:15:10.760 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.760 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:10.760 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:10.760 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.760 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:11.017 14:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:11.274 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:11.274 "name": "pt2", 00:15:11.274 "aliases": [ 00:15:11.274 "00000000-0000-0000-0000-000000000002" 00:15:11.274 ], 00:15:11.274 "product_name": "passthru", 00:15:11.274 "block_size": 512, 00:15:11.274 "num_blocks": 65536, 00:15:11.274 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.274 "assigned_rate_limits": { 00:15:11.274 "rw_ios_per_sec": 0, 00:15:11.274 "rw_mbytes_per_sec": 0, 00:15:11.274 "r_mbytes_per_sec": 0, 00:15:11.274 "w_mbytes_per_sec": 0 00:15:11.274 }, 00:15:11.274 "claimed": true, 00:15:11.274 "claim_type": "exclusive_write", 00:15:11.274 "zoned": false, 00:15:11.274 "supported_io_types": { 00:15:11.274 "read": true, 00:15:11.274 "write": true, 00:15:11.274 "unmap": true, 00:15:11.274 "flush": true, 00:15:11.274 "reset": true, 00:15:11.274 "nvme_admin": false, 00:15:11.274 "nvme_io": false, 00:15:11.274 "nvme_io_md": false, 00:15:11.274 "write_zeroes": true, 00:15:11.274 "zcopy": true, 00:15:11.274 "get_zone_info": false, 00:15:11.274 "zone_management": false, 00:15:11.274 "zone_append": false, 00:15:11.274 "compare": false, 00:15:11.274 "compare_and_write": false, 00:15:11.274 "abort": true, 00:15:11.274 "seek_hole": false, 00:15:11.274 "seek_data": false, 00:15:11.274 "copy": true, 00:15:11.274 "nvme_iov_md": false 00:15:11.274 }, 00:15:11.274 "memory_domains": [ 00:15:11.274 { 00:15:11.274 "dma_device_id": "system", 00:15:11.274 "dma_device_type": 1 00:15:11.274 }, 00:15:11.274 { 00:15:11.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.274 "dma_device_type": 2 00:15:11.274 } 00:15:11.274 ], 00:15:11.274 "driver_specific": { 00:15:11.274 "passthru": { 00:15:11.274 "name": "pt2", 00:15:11.274 "base_bdev_name": "malloc2" 00:15:11.274 } 00:15:11.274 } 00:15:11.274 }' 00:15:11.274 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.274 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:11.531 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.789 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:11.789 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:11.789 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:11.789 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:12.046 [2024-07-15 14:07:57.823594] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9840fe0a-0ea9-4975-857c-2827ea936cf7 '!=' 9840fe0a-0ea9-4975-857c-2827ea936cf7 ']' 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 188698 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 188698 ']' 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 188698 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 188698 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 188698' 00:15:12.046 killing process with pid 188698 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 188698 00:15:12.046 [2024-07-15 14:07:57.881365] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.046 14:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 188698 00:15:12.046 [2024-07-15 14:07:57.881594] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.046 [2024-07-15 14:07:57.881643] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.046 [2024-07-15 14:07:57.881653] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:12.303 [2024-07-15 14:07:58.052663] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.245 14:07:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@564 -- # return 0 00:15:13.245 00:15:13.245 real 0m12.748s 00:15:13.245 user 0m22.405s 00:15:13.245 sys 0m1.552s 00:15:13.245 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:13.245 14:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.245 ************************************ 00:15:13.245 END TEST raid_superblock_test 00:15:13.245 ************************************ 00:15:13.245 14:07:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:13.245 14:07:59 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:13.245 14:07:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:13.245 14:07:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:13.245 14:07:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.245 ************************************ 00:15:13.245 START TEST raid_read_error_test 00:15:13.245 ************************************ 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 
-- # bdevperf_log=/raidtest/tmp.zlC99RuyMb 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=189077 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 189077 /var/tmp/spdk-raid.sock 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 189077 ']' 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.245 14:07:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.502 [2024-07-15 14:07:59.268545] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:15:13.502 [2024-07-15 14:07:59.268707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189077 ] 00:15:13.502 [2024-07-15 14:07:59.419353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.760 [2024-07-15 14:07:59.635430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.018 [2024-07-15 14:07:59.832136] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.275 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.275 14:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:14.275 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:14.275 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:14.840 BaseBdev1_malloc 00:15:14.840 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:14.840 true 00:15:15.097 14:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:15.097 [2024-07-15 14:08:01.093516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:15.097 [2024-07-15 14:08:01.094026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.097 [2024-07-15 14:08:01.094153] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:15:15.097 [2024-07-15 14:08:01.094253] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:15:15.097 [2024-07-15 14:08:01.096129] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.097 [2024-07-15 14:08:01.096269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:15.097 BaseBdev1 00:15:15.355 14:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:15.355 14:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:15.612 BaseBdev2_malloc 00:15:15.612 14:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:15.870 true 00:15:15.870 14:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:16.127 [2024-07-15 14:08:01.927067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:16.127 [2024-07-15 14:08:01.927349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.127 [2024-07-15 14:08:01.927460] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.127 [2024-07-15 14:08:01.927544] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.127 [2024-07-15 14:08:01.929418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.127 [2024-07-15 14:08:01.929542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:16.127 BaseBdev2 00:15:16.127 14:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:16.385 [2024-07-15 14:08:02.227184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.385 [2024-07-15 14:08:02.228707] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.385 [2024-07-15 14:08:02.228918] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:15:16.385 [2024-07-15 14:08:02.228936] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.385 [2024-07-15 14:08:02.229053] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:16.385 [2024-07-15 14:08:02.229305] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:15:16.385 [2024-07-15 14:08:02.229320] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:15:16.385 [2024-07-15 14:08:02.229437] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:16.385 
14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.385 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.642 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:16.642 "name": "raid_bdev1", 00:15:16.642 "uuid": "e4c1f295-66df-4aed-9e2a-4c6a7141b9c5", 00:15:16.642 "strip_size_kb": 64, 00:15:16.642 "state": "online", 00:15:16.642 "raid_level": "concat", 00:15:16.642 "superblock": true, 00:15:16.642 "num_base_bdevs": 2, 00:15:16.642 "num_base_bdevs_discovered": 2, 00:15:16.642 "num_base_bdevs_operational": 2, 00:15:16.642 "base_bdevs_list": [ 00:15:16.642 { 00:15:16.642 "name": "BaseBdev1", 00:15:16.642 "uuid": "422a2cb3-dea3-53af-a5c4-ac283e679169", 00:15:16.642 "is_configured": true, 00:15:16.642 "data_offset": 2048, 00:15:16.642 "data_size": 63488 00:15:16.642 }, 00:15:16.642 { 00:15:16.642 "name": "BaseBdev2", 00:15:16.642 "uuid": "451ee713-3399-541c-bdab-cea73f640910", 00:15:16.642 "is_configured": true, 00:15:16.642 "data_offset": 2048, 00:15:16.642 "data_size": 63488 00:15:16.642 } 00:15:16.642 ] 00:15:16.642 }' 00:15:16.642 14:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:16.642 14:08:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.576 14:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:17.576 14:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:17.576 [2024-07-15 14:08:03.344477] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:18.512 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:18.771 14:08:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.771 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.029 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:19.029 "name": "raid_bdev1", 00:15:19.029 "uuid": "e4c1f295-66df-4aed-9e2a-4c6a7141b9c5", 00:15:19.029 "strip_size_kb": 64, 00:15:19.029 "state": "online", 00:15:19.029 "raid_level": "concat", 00:15:19.029 "superblock": true, 00:15:19.029 "num_base_bdevs": 2, 00:15:19.029 "num_base_bdevs_discovered": 2, 00:15:19.029 "num_base_bdevs_operational": 2, 00:15:19.029 "base_bdevs_list": [ 00:15:19.029 { 00:15:19.029 "name": "BaseBdev1", 00:15:19.029 "uuid": "422a2cb3-dea3-53af-a5c4-ac283e679169", 00:15:19.029 "is_configured": true, 00:15:19.029 "data_offset": 2048, 00:15:19.029 "data_size": 63488 00:15:19.029 }, 00:15:19.029 { 00:15:19.029 "name": "BaseBdev2", 00:15:19.029 "uuid": "451ee713-3399-541c-bdab-cea73f640910", 00:15:19.029 "is_configured": true, 00:15:19.029 "data_offset": 2048, 00:15:19.029 "data_size": 63488 00:15:19.029 } 00:15:19.029 ] 00:15:19.029 }' 00:15:19.029 14:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:19.029 14:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.596 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:19.855 [2024-07-15 14:08:05.652655] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.855 [2024-07-15 14:08:05.652709] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.855 [2024-07-15 14:08:05.654054] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.855 [2024-07-15 14:08:05.654103] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.855 [2024-07-15 14:08:05.654130] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.855 [2024-07-15 14:08:05.654141] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:15:19.855 0 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 189077 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 189077 ']' 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 189077 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 189077 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:19.855 killing process with pid 189077 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 189077' 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 189077 00:15:19.855 [2024-07-15 14:08:05.694076] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.855 14:08:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 189077 00:15:19.855 [2024-07-15 14:08:05.809065] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.zlC99RuyMb 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:15:21.296 00:15:21.296 real 0m7.790s 00:15:21.296 user 0m11.950s 00:15:21.296 sys 0m0.826s 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:21.296 ************************************ 00:15:21.296 END TEST raid_read_error_test 00:15:21.296 ************************************ 00:15:21.296 14:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.296 14:08:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:21.296 14:08:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:21.296 14:08:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:21.296 14:08:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:21.296 14:08:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.296 ************************************ 00:15:21.296 START TEST raid_write_error_test 00:15:21.296 ************************************ 00:15:21.296 14:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:21.297 14:08:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ziNVtOxUep 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=189279 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 189279 /var/tmp/spdk-raid.sock 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 189279 ']' 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:21.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:21.297 14:08:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.297 [2024-07-15 14:08:07.117912] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:15:21.297 [2024-07-15 14:08:07.118111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid189279 ] 00:15:21.297 [2024-07-15 14:08:07.280844] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.555 [2024-07-15 14:08:07.500710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.814 [2024-07-15 14:08:07.700362] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.380 14:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:22.380 14:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:22.380 14:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:22.380 14:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:22.380 BaseBdev1_malloc 00:15:22.380 14:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:22.947 true 00:15:22.947 14:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:22.947 [2024-07-15 14:08:08.922683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:22.947 [2024-07-15 14:08:08.922833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.947 [2024-07-15 14:08:08.922883] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:15:22.947 [2024-07-15 14:08:08.922912] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.947 [2024-07-15 14:08:08.924722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.947 [2024-07-15 14:08:08.924814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:22.947 BaseBdev1 00:15:22.947 14:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:22.947 14:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:23.206 BaseBdev2_malloc 00:15:23.465 14:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:23.724 true 00:15:23.724 14:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:23.981 [2024-07-15 14:08:09.785090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:23.981 [2024-07-15 14:08:09.785232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.981 [2024-07-15 14:08:09.785280] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:23.981 [2024-07-15 
14:08:09.785305] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.981 [2024-07-15 14:08:09.787106] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.981 [2024-07-15 14:08:09.787172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:23.981 BaseBdev2 00:15:23.981 14:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:24.239 [2024-07-15 14:08:10.025213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.239 [2024-07-15 14:08:10.026756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.239 [2024-07-15 14:08:10.026949] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:15:24.239 [2024-07-15 14:08:10.026977] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:24.239 [2024-07-15 14:08:10.027095] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:24.239 [2024-07-15 14:08:10.027367] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:15:24.239 [2024-07-15 14:08:10.027383] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:15:24.239 [2024-07-15 14:08:10.027510] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.239 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.497 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.497 "name": "raid_bdev1", 00:15:24.497 "uuid": "9926da8f-88c2-44d5-99b5-40c0cbb56d19", 00:15:24.497 "strip_size_kb": 64, 00:15:24.497 "state": "online", 00:15:24.497 "raid_level": "concat", 00:15:24.497 "superblock": true, 00:15:24.497 "num_base_bdevs": 2, 00:15:24.497 "num_base_bdevs_discovered": 2, 00:15:24.497 "num_base_bdevs_operational": 2, 00:15:24.497 "base_bdevs_list": [ 00:15:24.497 { 
00:15:24.497 "name": "BaseBdev1", 00:15:24.497 "uuid": "1c65c50d-174f-58a1-8f31-5f5da0ad6c5a", 00:15:24.497 "is_configured": true, 00:15:24.497 "data_offset": 2048, 00:15:24.497 "data_size": 63488 00:15:24.497 }, 00:15:24.497 { 00:15:24.497 "name": "BaseBdev2", 00:15:24.497 "uuid": "d0959f89-5aaf-5912-8a6c-b4fde99b454c", 00:15:24.497 "is_configured": true, 00:15:24.497 "data_offset": 2048, 00:15:24.497 "data_size": 63488 00:15:24.497 } 00:15:24.497 ] 00:15:24.497 }' 00:15:24.497 14:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.497 14:08:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.063 14:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:25.063 14:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:25.322 [2024-07-15 14:08:11.126466] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:15:26.283 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.545 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.804 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.804 "name": "raid_bdev1", 00:15:26.804 "uuid": "9926da8f-88c2-44d5-99b5-40c0cbb56d19", 00:15:26.804 "strip_size_kb": 64, 00:15:26.804 "state": "online", 00:15:26.804 "raid_level": "concat", 00:15:26.804 "superblock": true, 00:15:26.804 "num_base_bdevs": 2, 00:15:26.804 "num_base_bdevs_discovered": 2, 00:15:26.804 "num_base_bdevs_operational": 2, 00:15:26.804 "base_bdevs_list": [ 00:15:26.804 { 
00:15:26.804 "name": "BaseBdev1", 00:15:26.804 "uuid": "1c65c50d-174f-58a1-8f31-5f5da0ad6c5a", 00:15:26.804 "is_configured": true, 00:15:26.804 "data_offset": 2048, 00:15:26.804 "data_size": 63488 00:15:26.804 }, 00:15:26.804 { 00:15:26.804 "name": "BaseBdev2", 00:15:26.804 "uuid": "d0959f89-5aaf-5912-8a6c-b4fde99b454c", 00:15:26.804 "is_configured": true, 00:15:26.805 "data_offset": 2048, 00:15:26.805 "data_size": 63488 00:15:26.805 } 00:15:26.805 ] 00:15:26.805 }' 00:15:26.805 14:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.805 14:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.372 14:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:27.632 [2024-07-15 14:08:13.564226] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:27.632 [2024-07-15 14:08:13.564527] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.632 [2024-07-15 14:08:13.566010] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.632 [2024-07-15 14:08:13.566176] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.632 [2024-07-15 14:08:13.566321] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:27.632 [2024-07-15 14:08:13.566467] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:15:27.632 0 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 189279 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 189279 ']' 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 189279 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 189279 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 189279' 00:15:27.632 killing process with pid 189279 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 189279 00:15:27.632 [2024-07-15 14:08:13.627035] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:27.632 14:08:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 189279 00:15:27.891 [2024-07-15 14:08:13.741977] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ziNVtOxUep 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 
00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:15:29.267 00:15:29.267 real 0m7.883s 00:15:29.267 user 0m12.082s 00:15:29.267 sys 0m0.818s 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:29.267 14:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.267 ************************************ 00:15:29.267 END TEST raid_write_error_test 00:15:29.267 ************************************ 00:15:29.267 14:08:14 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:29.267 14:08:14 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:29.267 14:08:14 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:29.267 14:08:14 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:29.267 14:08:14 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:29.267 14:08:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:29.267 ************************************ 00:15:29.267 START TEST raid_state_function_test 00:15:29.267 ************************************ 00:15:29.267 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:15:29.267 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:29.267 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:29.267 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:29.267 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:29.267 14:08:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=189479 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 189479' 00:15:29.267 Process raid pid: 189479 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 189479 /var/tmp/spdk-raid.sock 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 189479 ']' 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:29.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.267 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.267 [2024-07-15 14:08:15.045088] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:15:29.267 [2024-07-15 14:08:15.045493] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.267 [2024-07-15 14:08:15.194563] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.526 [2024-07-15 14:08:15.414889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.784 [2024-07-15 14:08:15.617649] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:30.042 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.042 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:30.042 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:30.300 [2024-07-15 14:08:16.279460] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:30.300 [2024-07-15 14:08:16.280180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:30.300 [2024-07-15 14:08:16.280323] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:30.300 [2024-07-15 14:08:16.280464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:30.300 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:30.557 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:30.557 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.557 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.557 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.557 "name": "Existed_Raid", 00:15:30.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.557 "strip_size_kb": 0, 00:15:30.557 "state": "configuring", 00:15:30.557 "raid_level": "raid1", 00:15:30.558 "superblock": false, 00:15:30.558 "num_base_bdevs": 2, 00:15:30.558 "num_base_bdevs_discovered": 0, 00:15:30.558 "num_base_bdevs_operational": 2, 00:15:30.558 "base_bdevs_list": [ 
00:15:30.558 { 00:15:30.558 "name": "BaseBdev1", 00:15:30.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.558 "is_configured": false, 00:15:30.558 "data_offset": 0, 00:15:30.558 "data_size": 0 00:15:30.558 }, 00:15:30.558 { 00:15:30.558 "name": "BaseBdev2", 00:15:30.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.558 "is_configured": false, 00:15:30.558 "data_offset": 0, 00:15:30.558 "data_size": 0 00:15:30.558 } 00:15:30.558 ] 00:15:30.558 }' 00:15:30.558 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.558 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.123 14:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:31.381 [2024-07-15 14:08:17.351542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.381 [2024-07-15 14:08:17.351835] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:31.381 14:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:31.639 [2024-07-15 14:08:17.635598] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.639 [2024-07-15 14:08:17.636291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.639 [2024-07-15 14:08:17.636433] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.639 [2024-07-15 14:08:17.636594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.897 14:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:32.154 [2024-07-15 14:08:17.922682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:32.154 BaseBdev1 00:15:32.154 14:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:32.154 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:32.154 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:32.154 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:32.154 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:32.154 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:32.154 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:32.411 14:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:32.668 [ 00:15:32.668 { 00:15:32.668 "name": "BaseBdev1", 00:15:32.668 "aliases": [ 00:15:32.668 "c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3" 00:15:32.668 ], 00:15:32.668 "product_name": "Malloc disk", 00:15:32.668 "block_size": 512, 00:15:32.668 "num_blocks": 
65536, 00:15:32.668 "uuid": "c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3", 00:15:32.668 "assigned_rate_limits": { 00:15:32.668 "rw_ios_per_sec": 0, 00:15:32.668 "rw_mbytes_per_sec": 0, 00:15:32.668 "r_mbytes_per_sec": 0, 00:15:32.668 "w_mbytes_per_sec": 0 00:15:32.668 }, 00:15:32.668 "claimed": true, 00:15:32.668 "claim_type": "exclusive_write", 00:15:32.668 "zoned": false, 00:15:32.668 "supported_io_types": { 00:15:32.668 "read": true, 00:15:32.668 "write": true, 00:15:32.668 "unmap": true, 00:15:32.668 "flush": true, 00:15:32.668 "reset": true, 00:15:32.668 "nvme_admin": false, 00:15:32.668 "nvme_io": false, 00:15:32.668 "nvme_io_md": false, 00:15:32.668 "write_zeroes": true, 00:15:32.668 "zcopy": true, 00:15:32.668 "get_zone_info": false, 00:15:32.668 "zone_management": false, 00:15:32.668 "zone_append": false, 00:15:32.668 "compare": false, 00:15:32.668 "compare_and_write": false, 00:15:32.668 "abort": true, 00:15:32.668 "seek_hole": false, 00:15:32.668 "seek_data": false, 00:15:32.668 "copy": true, 00:15:32.668 "nvme_iov_md": false 00:15:32.668 }, 00:15:32.668 "memory_domains": [ 00:15:32.668 { 00:15:32.668 "dma_device_id": "system", 00:15:32.668 "dma_device_type": 1 00:15:32.668 }, 00:15:32.668 { 00:15:32.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.668 "dma_device_type": 2 00:15:32.668 } 00:15:32.668 ], 00:15:32.668 "driver_specific": {} 00:15:32.668 } 00:15:32.668 ] 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.668 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.924 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.924 "name": "Existed_Raid", 00:15:32.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.924 "strip_size_kb": 0, 00:15:32.924 "state": "configuring", 00:15:32.924 "raid_level": "raid1", 00:15:32.925 "superblock": false, 00:15:32.925 "num_base_bdevs": 2, 00:15:32.925 "num_base_bdevs_discovered": 1, 00:15:32.925 "num_base_bdevs_operational": 2, 00:15:32.925 "base_bdevs_list": [ 00:15:32.925 { 00:15:32.925 "name": "BaseBdev1", 00:15:32.925 "uuid": 
"c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3", 00:15:32.925 "is_configured": true, 00:15:32.925 "data_offset": 0, 00:15:32.925 "data_size": 65536 00:15:32.925 }, 00:15:32.925 { 00:15:32.925 "name": "BaseBdev2", 00:15:32.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.925 "is_configured": false, 00:15:32.925 "data_offset": 0, 00:15:32.925 "data_size": 0 00:15:32.925 } 00:15:32.925 ] 00:15:32.925 }' 00:15:32.925 14:08:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.925 14:08:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.489 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:33.747 [2024-07-15 14:08:19.687028] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.747 [2024-07-15 14:08:19.687364] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:33.747 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:34.006 [2024-07-15 14:08:19.927104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.006 [2024-07-15 14:08:19.928890] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.006 [2024-07-15 14:08:19.929400] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.006 14:08:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.264 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.264 "name": "Existed_Raid", 00:15:34.264 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:34.264 "strip_size_kb": 0, 00:15:34.264 "state": "configuring", 00:15:34.264 "raid_level": "raid1", 00:15:34.264 "superblock": false, 00:15:34.264 "num_base_bdevs": 2, 00:15:34.264 "num_base_bdevs_discovered": 1, 00:15:34.264 "num_base_bdevs_operational": 2, 00:15:34.264 "base_bdevs_list": [ 00:15:34.264 { 00:15:34.264 "name": "BaseBdev1", 00:15:34.264 "uuid": "c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3", 00:15:34.264 "is_configured": true, 00:15:34.264 "data_offset": 0, 00:15:34.264 "data_size": 65536 00:15:34.264 }, 00:15:34.264 { 00:15:34.264 "name": "BaseBdev2", 00:15:34.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.264 "is_configured": false, 00:15:34.264 "data_offset": 0, 00:15:34.264 "data_size": 0 00:15:34.264 } 00:15:34.264 ] 00:15:34.264 }' 00:15:34.264 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.264 14:08:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.199 14:08:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.199 [2024-07-15 14:08:21.176202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.199 [2024-07-15 14:08:21.176470] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:35.199 [2024-07-15 14:08:21.176538] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:35.199 [2024-07-15 14:08:21.176807] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:35.199 [2024-07-15 14:08:21.177183] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:35.199 [2024-07-15 14:08:21.177311] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:35.199 [2024-07-15 14:08:21.177645] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.199 BaseBdev2 00:15:35.199 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:35.199 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:35.199 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:35.199 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:35.199 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:35.199 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:35.199 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:35.458 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:35.717 [ 00:15:35.717 { 00:15:35.717 "name": "BaseBdev2", 00:15:35.717 "aliases": [ 00:15:35.717 "1dcd7097-3dfd-4e83-b545-ac9300108708" 00:15:35.717 ], 00:15:35.717 "product_name": "Malloc disk", 00:15:35.717 "block_size": 512, 00:15:35.717 "num_blocks": 65536, 00:15:35.717 "uuid": "1dcd7097-3dfd-4e83-b545-ac9300108708", 00:15:35.717 
"assigned_rate_limits": { 00:15:35.717 "rw_ios_per_sec": 0, 00:15:35.717 "rw_mbytes_per_sec": 0, 00:15:35.717 "r_mbytes_per_sec": 0, 00:15:35.717 "w_mbytes_per_sec": 0 00:15:35.717 }, 00:15:35.717 "claimed": true, 00:15:35.717 "claim_type": "exclusive_write", 00:15:35.717 "zoned": false, 00:15:35.717 "supported_io_types": { 00:15:35.717 "read": true, 00:15:35.717 "write": true, 00:15:35.717 "unmap": true, 00:15:35.717 "flush": true, 00:15:35.717 "reset": true, 00:15:35.717 "nvme_admin": false, 00:15:35.717 "nvme_io": false, 00:15:35.717 "nvme_io_md": false, 00:15:35.717 "write_zeroes": true, 00:15:35.717 "zcopy": true, 00:15:35.717 "get_zone_info": false, 00:15:35.717 "zone_management": false, 00:15:35.717 "zone_append": false, 00:15:35.717 "compare": false, 00:15:35.717 "compare_and_write": false, 00:15:35.717 "abort": true, 00:15:35.717 "seek_hole": false, 00:15:35.717 "seek_data": false, 00:15:35.717 "copy": true, 00:15:35.717 "nvme_iov_md": false 00:15:35.717 }, 00:15:35.717 "memory_domains": [ 00:15:35.717 { 00:15:35.717 "dma_device_id": "system", 00:15:35.717 "dma_device_type": 1 00:15:35.717 }, 00:15:35.717 { 00:15:35.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.717 "dma_device_type": 2 00:15:35.717 } 00:15:35.717 ], 00:15:35.717 "driver_specific": {} 00:15:35.717 } 00:15:35.717 ] 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.717 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.976 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.976 "name": "Existed_Raid", 00:15:35.976 "uuid": "930293f8-fcdb-47d7-8b29-ef9a2ab5c08d", 00:15:35.976 "strip_size_kb": 0, 00:15:35.976 "state": "online", 00:15:35.976 "raid_level": "raid1", 00:15:35.976 "superblock": false, 00:15:35.976 "num_base_bdevs": 2, 00:15:35.976 "num_base_bdevs_discovered": 2, 00:15:35.976 "num_base_bdevs_operational": 
2, 00:15:35.976 "base_bdevs_list": [ 00:15:35.976 { 00:15:35.976 "name": "BaseBdev1", 00:15:35.976 "uuid": "c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3", 00:15:35.976 "is_configured": true, 00:15:35.976 "data_offset": 0, 00:15:35.976 "data_size": 65536 00:15:35.976 }, 00:15:35.976 { 00:15:35.976 "name": "BaseBdev2", 00:15:35.976 "uuid": "1dcd7097-3dfd-4e83-b545-ac9300108708", 00:15:35.976 "is_configured": true, 00:15:35.976 "data_offset": 0, 00:15:35.976 "data_size": 65536 00:15:35.976 } 00:15:35.976 ] 00:15:35.976 }' 00:15:35.976 14:08:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.976 14:08:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:36.913 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:37.171 [2024-07-15 14:08:22.928683] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.171 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:37.171 "name": "Existed_Raid", 00:15:37.171 "aliases": [ 00:15:37.171 "930293f8-fcdb-47d7-8b29-ef9a2ab5c08d" 00:15:37.171 ], 00:15:37.171 "product_name": "Raid Volume", 00:15:37.171 "block_size": 512, 00:15:37.171 "num_blocks": 65536, 00:15:37.171 "uuid": "930293f8-fcdb-47d7-8b29-ef9a2ab5c08d", 00:15:37.171 "assigned_rate_limits": { 00:15:37.171 "rw_ios_per_sec": 0, 00:15:37.171 "rw_mbytes_per_sec": 0, 00:15:37.171 "r_mbytes_per_sec": 0, 00:15:37.171 "w_mbytes_per_sec": 0 00:15:37.171 }, 00:15:37.171 "claimed": false, 00:15:37.171 "zoned": false, 00:15:37.171 "supported_io_types": { 00:15:37.171 "read": true, 00:15:37.171 "write": true, 00:15:37.171 "unmap": false, 00:15:37.171 "flush": false, 00:15:37.171 "reset": true, 00:15:37.171 "nvme_admin": false, 00:15:37.171 "nvme_io": false, 00:15:37.171 "nvme_io_md": false, 00:15:37.171 "write_zeroes": true, 00:15:37.171 "zcopy": false, 00:15:37.171 "get_zone_info": false, 00:15:37.171 "zone_management": false, 00:15:37.171 "zone_append": false, 00:15:37.171 "compare": false, 00:15:37.171 "compare_and_write": false, 00:15:37.171 "abort": false, 00:15:37.171 "seek_hole": false, 00:15:37.171 "seek_data": false, 00:15:37.171 "copy": false, 00:15:37.171 "nvme_iov_md": false 00:15:37.171 }, 00:15:37.171 "memory_domains": [ 00:15:37.171 { 00:15:37.171 "dma_device_id": "system", 00:15:37.171 "dma_device_type": 1 00:15:37.171 }, 00:15:37.171 { 00:15:37.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.171 "dma_device_type": 2 00:15:37.171 }, 00:15:37.171 { 00:15:37.171 "dma_device_id": "system", 00:15:37.171 "dma_device_type": 1 00:15:37.171 }, 00:15:37.171 { 00:15:37.171 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.171 "dma_device_type": 2 00:15:37.171 } 00:15:37.171 ], 00:15:37.171 "driver_specific": { 00:15:37.171 "raid": { 00:15:37.171 "uuid": "930293f8-fcdb-47d7-8b29-ef9a2ab5c08d", 00:15:37.171 "strip_size_kb": 0, 00:15:37.171 "state": "online", 00:15:37.171 "raid_level": "raid1", 00:15:37.171 "superblock": false, 00:15:37.171 "num_base_bdevs": 2, 00:15:37.171 "num_base_bdevs_discovered": 2, 00:15:37.171 "num_base_bdevs_operational": 2, 00:15:37.171 "base_bdevs_list": [ 00:15:37.171 { 00:15:37.171 "name": "BaseBdev1", 00:15:37.171 "uuid": "c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3", 00:15:37.171 "is_configured": true, 00:15:37.171 "data_offset": 0, 00:15:37.171 "data_size": 65536 00:15:37.171 }, 00:15:37.171 { 00:15:37.171 "name": "BaseBdev2", 00:15:37.171 "uuid": "1dcd7097-3dfd-4e83-b545-ac9300108708", 00:15:37.171 "is_configured": true, 00:15:37.171 "data_offset": 0, 00:15:37.171 "data_size": 65536 00:15:37.171 } 00:15:37.171 ] 00:15:37.171 } 00:15:37.171 } 00:15:37.171 }' 00:15:37.171 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.171 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:37.171 BaseBdev2' 00:15:37.171 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.171 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:37.171 14:08:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.434 "name": "BaseBdev1", 00:15:37.434 "aliases": [ 00:15:37.434 "c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3" 00:15:37.434 ], 00:15:37.434 "product_name": "Malloc disk", 00:15:37.434 "block_size": 512, 00:15:37.434 "num_blocks": 65536, 00:15:37.434 "uuid": "c4403c61-bc19-4dd8-b4ac-5bc8f6f13ae3", 00:15:37.434 "assigned_rate_limits": { 00:15:37.434 "rw_ios_per_sec": 0, 00:15:37.434 "rw_mbytes_per_sec": 0, 00:15:37.434 "r_mbytes_per_sec": 0, 00:15:37.434 "w_mbytes_per_sec": 0 00:15:37.434 }, 00:15:37.434 "claimed": true, 00:15:37.434 "claim_type": "exclusive_write", 00:15:37.434 "zoned": false, 00:15:37.434 "supported_io_types": { 00:15:37.434 "read": true, 00:15:37.434 "write": true, 00:15:37.434 "unmap": true, 00:15:37.434 "flush": true, 00:15:37.434 "reset": true, 00:15:37.434 "nvme_admin": false, 00:15:37.434 "nvme_io": false, 00:15:37.434 "nvme_io_md": false, 00:15:37.434 "write_zeroes": true, 00:15:37.434 "zcopy": true, 00:15:37.434 "get_zone_info": false, 00:15:37.434 "zone_management": false, 00:15:37.434 "zone_append": false, 00:15:37.434 "compare": false, 00:15:37.434 "compare_and_write": false, 00:15:37.434 "abort": true, 00:15:37.434 "seek_hole": false, 00:15:37.434 "seek_data": false, 00:15:37.434 "copy": true, 00:15:37.434 "nvme_iov_md": false 00:15:37.434 }, 00:15:37.434 "memory_domains": [ 00:15:37.434 { 00:15:37.434 "dma_device_id": "system", 00:15:37.434 "dma_device_type": 1 00:15:37.434 }, 00:15:37.434 { 00:15:37.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.434 "dma_device_type": 2 00:15:37.434 } 00:15:37.434 ], 00:15:37.434 "driver_specific": {} 00:15:37.434 }' 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:37.434 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:37.691 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:37.948 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:37.948 "name": "BaseBdev2", 00:15:37.948 "aliases": [ 00:15:37.948 "1dcd7097-3dfd-4e83-b545-ac9300108708" 00:15:37.948 ], 00:15:37.948 "product_name": "Malloc disk", 00:15:37.948 "block_size": 512, 00:15:37.948 "num_blocks": 65536, 00:15:37.948 "uuid": "1dcd7097-3dfd-4e83-b545-ac9300108708", 00:15:37.948 "assigned_rate_limits": { 00:15:37.948 "rw_ios_per_sec": 0, 00:15:37.948 "rw_mbytes_per_sec": 0, 00:15:37.948 "r_mbytes_per_sec": 0, 00:15:37.948 "w_mbytes_per_sec": 0 00:15:37.948 }, 00:15:37.948 "claimed": true, 00:15:37.948 "claim_type": "exclusive_write", 00:15:37.948 "zoned": false, 00:15:37.948 "supported_io_types": { 00:15:37.948 "read": true, 00:15:37.948 "write": true, 00:15:37.948 "unmap": true, 00:15:37.948 "flush": true, 00:15:37.948 "reset": true, 00:15:37.948 "nvme_admin": false, 00:15:37.948 "nvme_io": false, 00:15:37.948 "nvme_io_md": false, 00:15:37.948 "write_zeroes": true, 00:15:37.948 "zcopy": true, 00:15:37.948 "get_zone_info": false, 00:15:37.948 "zone_management": false, 00:15:37.948 "zone_append": false, 00:15:37.948 "compare": false, 00:15:37.948 "compare_and_write": false, 00:15:37.948 "abort": true, 00:15:37.948 "seek_hole": false, 00:15:37.948 "seek_data": false, 00:15:37.948 "copy": true, 00:15:37.948 "nvme_iov_md": false 00:15:37.948 }, 00:15:37.948 "memory_domains": [ 00:15:37.948 { 00:15:37.948 "dma_device_id": "system", 00:15:37.948 "dma_device_type": 1 00:15:37.948 }, 00:15:37.948 { 00:15:37.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.948 "dma_device_type": 2 00:15:37.948 } 00:15:37.948 ], 00:15:37.948 "driver_specific": {} 00:15:37.948 }' 00:15:37.948 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:37.948 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.206 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
[[ 512 == 512 ]] 00:15:38.206 14:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.206 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.206 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.206 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.206 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.206 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.206 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.464 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.464 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.464 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:38.722 [2024-07-15 14:08:24.548816] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.722 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.288 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.288 "name": "Existed_Raid", 00:15:39.288 "uuid": "930293f8-fcdb-47d7-8b29-ef9a2ab5c08d", 00:15:39.288 "strip_size_kb": 0, 00:15:39.288 "state": "online", 00:15:39.288 "raid_level": "raid1", 00:15:39.288 "superblock": false, 
00:15:39.288 "num_base_bdevs": 2, 00:15:39.288 "num_base_bdevs_discovered": 1, 00:15:39.288 "num_base_bdevs_operational": 1, 00:15:39.288 "base_bdevs_list": [ 00:15:39.288 { 00:15:39.288 "name": null, 00:15:39.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.288 "is_configured": false, 00:15:39.288 "data_offset": 0, 00:15:39.288 "data_size": 65536 00:15:39.288 }, 00:15:39.288 { 00:15:39.288 "name": "BaseBdev2", 00:15:39.288 "uuid": "1dcd7097-3dfd-4e83-b545-ac9300108708", 00:15:39.288 "is_configured": true, 00:15:39.288 "data_offset": 0, 00:15:39.288 "data_size": 65536 00:15:39.288 } 00:15:39.288 ] 00:15:39.288 }' 00:15:39.288 14:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.288 14:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.855 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:39.855 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:39.855 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.855 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:40.114 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:40.114 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.114 14:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:40.372 [2024-07-15 14:08:26.226372] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.372 [2024-07-15 14:08:26.226684] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.372 [2024-07-15 14:08:26.313834] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.372 [2024-07-15 14:08:26.314917] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.372 [2024-07-15 14:08:26.315173] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:40.372 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:40.372 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:40.372 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.372 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 189479 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 189479 ']' 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # 
kill -0 189479 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 189479 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:40.631 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:40.632 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 189479' 00:15:40.632 killing process with pid 189479 00:15:40.632 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 189479 00:15:40.632 14:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 189479 00:15:40.632 [2024-07-15 14:08:26.595610] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:40.632 [2024-07-15 14:08:26.595787] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:42.122 00:15:42.122 real 0m12.771s 00:15:42.122 user 0m22.434s 00:15:42.122 sys 0m1.452s 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.122 ************************************ 00:15:42.122 END TEST raid_state_function_test 00:15:42.122 ************************************ 00:15:42.122 14:08:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:42.122 14:08:27 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:42.122 14:08:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:42.122 14:08:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.122 14:08:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.122 ************************************ 00:15:42.122 START TEST raid_state_function_test_sb 00:15:42.122 ************************************ 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=189875 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 189875' 00:15:42.122 Process raid pid: 189875 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 189875 /var/tmp/spdk-raid.sock 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 189875 ']' 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:42.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:42.122 14:08:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.122 [2024-07-15 14:08:27.873304] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:15:42.122 [2024-07-15 14:08:27.873679] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.122 [2024-07-15 14:08:28.045690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.382 [2024-07-15 14:08:28.302633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.640 [2024-07-15 14:08:28.505564] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.899 14:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.899 14:08:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:42.899 14:08:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:43.466 [2024-07-15 14:08:29.182694] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.466 [2024-07-15 14:08:29.183408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.466 [2024-07-15 14:08:29.183558] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.466 [2024-07-15 14:08:29.183796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.466 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.724 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.724 "name": "Existed_Raid", 00:15:43.724 "uuid": "5db3031d-8d6b-4c3f-bfe4-93ef11a4cb00", 00:15:43.724 "strip_size_kb": 0, 00:15:43.724 "state": "configuring", 00:15:43.724 "raid_level": "raid1", 00:15:43.724 "superblock": true, 00:15:43.724 "num_base_bdevs": 2, 00:15:43.724 "num_base_bdevs_discovered": 0, 00:15:43.724 
"num_base_bdevs_operational": 2, 00:15:43.724 "base_bdevs_list": [ 00:15:43.724 { 00:15:43.724 "name": "BaseBdev1", 00:15:43.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.724 "is_configured": false, 00:15:43.724 "data_offset": 0, 00:15:43.724 "data_size": 0 00:15:43.724 }, 00:15:43.724 { 00:15:43.724 "name": "BaseBdev2", 00:15:43.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.724 "is_configured": false, 00:15:43.724 "data_offset": 0, 00:15:43.724 "data_size": 0 00:15:43.724 } 00:15:43.724 ] 00:15:43.724 }' 00:15:43.724 14:08:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.724 14:08:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.292 14:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:44.551 [2024-07-15 14:08:30.354765] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.551 [2024-07-15 14:08:30.355019] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:44.551 14:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:44.810 [2024-07-15 14:08:30.590842] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.810 [2024-07-15 14:08:30.591503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.810 [2024-07-15 14:08:30.591643] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.810 [2024-07-15 14:08:30.591809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.810 14:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:45.068 [2024-07-15 14:08:30.867502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:45.068 BaseBdev1 00:15:45.068 14:08:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:45.068 14:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:45.068 14:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:45.068 14:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:45.068 14:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:45.068 14:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:45.068 14:08:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.327 14:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.585 [ 00:15:45.585 { 00:15:45.585 "name": "BaseBdev1", 00:15:45.585 "aliases": [ 00:15:45.585 "7ad868af-c127-4ee5-8ad0-95623d60e69a" 
00:15:45.585 ], 00:15:45.585 "product_name": "Malloc disk", 00:15:45.585 "block_size": 512, 00:15:45.585 "num_blocks": 65536, 00:15:45.585 "uuid": "7ad868af-c127-4ee5-8ad0-95623d60e69a", 00:15:45.585 "assigned_rate_limits": { 00:15:45.585 "rw_ios_per_sec": 0, 00:15:45.585 "rw_mbytes_per_sec": 0, 00:15:45.585 "r_mbytes_per_sec": 0, 00:15:45.585 "w_mbytes_per_sec": 0 00:15:45.585 }, 00:15:45.585 "claimed": true, 00:15:45.585 "claim_type": "exclusive_write", 00:15:45.585 "zoned": false, 00:15:45.585 "supported_io_types": { 00:15:45.585 "read": true, 00:15:45.585 "write": true, 00:15:45.585 "unmap": true, 00:15:45.585 "flush": true, 00:15:45.585 "reset": true, 00:15:45.585 "nvme_admin": false, 00:15:45.585 "nvme_io": false, 00:15:45.585 "nvme_io_md": false, 00:15:45.585 "write_zeroes": true, 00:15:45.585 "zcopy": true, 00:15:45.586 "get_zone_info": false, 00:15:45.586 "zone_management": false, 00:15:45.586 "zone_append": false, 00:15:45.586 "compare": false, 00:15:45.586 "compare_and_write": false, 00:15:45.586 "abort": true, 00:15:45.586 "seek_hole": false, 00:15:45.586 "seek_data": false, 00:15:45.586 "copy": true, 00:15:45.586 "nvme_iov_md": false 00:15:45.586 }, 00:15:45.586 "memory_domains": [ 00:15:45.586 { 00:15:45.586 "dma_device_id": "system", 00:15:45.586 "dma_device_type": 1 00:15:45.586 }, 00:15:45.586 { 00:15:45.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.586 "dma_device_type": 2 00:15:45.586 } 00:15:45.586 ], 00:15:45.586 "driver_specific": {} 00:15:45.586 } 00:15:45.586 ] 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.586 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.844 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.844 "name": "Existed_Raid", 00:15:45.844 "uuid": "892a8143-927a-4f7a-ad9f-4fad3a36cf4f", 00:15:45.844 "strip_size_kb": 0, 00:15:45.844 "state": "configuring", 00:15:45.844 "raid_level": "raid1", 00:15:45.844 "superblock": true, 00:15:45.844 "num_base_bdevs": 2, 00:15:45.844 "num_base_bdevs_discovered": 
1, 00:15:45.844 "num_base_bdevs_operational": 2, 00:15:45.844 "base_bdevs_list": [ 00:15:45.844 { 00:15:45.844 "name": "BaseBdev1", 00:15:45.844 "uuid": "7ad868af-c127-4ee5-8ad0-95623d60e69a", 00:15:45.844 "is_configured": true, 00:15:45.844 "data_offset": 2048, 00:15:45.844 "data_size": 63488 00:15:45.844 }, 00:15:45.844 { 00:15:45.844 "name": "BaseBdev2", 00:15:45.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.844 "is_configured": false, 00:15:45.844 "data_offset": 0, 00:15:45.844 "data_size": 0 00:15:45.844 } 00:15:45.844 ] 00:15:45.844 }' 00:15:45.844 14:08:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.844 14:08:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.781 14:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.781 [2024-07-15 14:08:32.763834] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.781 [2024-07-15 14:08:32.764517] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:46.781 14:08:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:47.040 [2024-07-15 14:08:33.007921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:47.040 [2024-07-15 14:08:33.009640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:47.040 [2024-07-15 14:08:33.010170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.040 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:15:47.622 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.622 "name": "Existed_Raid", 00:15:47.622 "uuid": "2fb9c9d0-db14-43e0-a674-462369b38c7c", 00:15:47.622 "strip_size_kb": 0, 00:15:47.622 "state": "configuring", 00:15:47.622 "raid_level": "raid1", 00:15:47.622 "superblock": true, 00:15:47.622 "num_base_bdevs": 2, 00:15:47.622 "num_base_bdevs_discovered": 1, 00:15:47.622 "num_base_bdevs_operational": 2, 00:15:47.622 "base_bdevs_list": [ 00:15:47.622 { 00:15:47.622 "name": "BaseBdev1", 00:15:47.622 "uuid": "7ad868af-c127-4ee5-8ad0-95623d60e69a", 00:15:47.622 "is_configured": true, 00:15:47.622 "data_offset": 2048, 00:15:47.622 "data_size": 63488 00:15:47.622 }, 00:15:47.622 { 00:15:47.622 "name": "BaseBdev2", 00:15:47.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.622 "is_configured": false, 00:15:47.622 "data_offset": 0, 00:15:47.622 "data_size": 0 00:15:47.622 } 00:15:47.622 ] 00:15:47.622 }' 00:15:47.622 14:08:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.622 14:08:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.248 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:48.506 [2024-07-15 14:08:34.298588] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.506 [2024-07-15 14:08:34.299170] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:48.506 [2024-07-15 14:08:34.299329] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.506 [2024-07-15 14:08:34.299600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:48.506 [2024-07-15 14:08:34.300064] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:48.506 [2024-07-15 14:08:34.300213] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:48.506 BaseBdev2 00:15:48.506 [2024-07-15 14:08:34.300534] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.506 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:48.506 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:48.506 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:48.506 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:48.506 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:48.506 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:48.506 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.763 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:49.022 [ 00:15:49.022 { 00:15:49.022 "name": "BaseBdev2", 00:15:49.022 "aliases": [ 00:15:49.022 
"400baf47-53af-4a34-9082-daacdb145256" 00:15:49.022 ], 00:15:49.022 "product_name": "Malloc disk", 00:15:49.022 "block_size": 512, 00:15:49.022 "num_blocks": 65536, 00:15:49.022 "uuid": "400baf47-53af-4a34-9082-daacdb145256", 00:15:49.022 "assigned_rate_limits": { 00:15:49.022 "rw_ios_per_sec": 0, 00:15:49.022 "rw_mbytes_per_sec": 0, 00:15:49.022 "r_mbytes_per_sec": 0, 00:15:49.022 "w_mbytes_per_sec": 0 00:15:49.022 }, 00:15:49.022 "claimed": true, 00:15:49.022 "claim_type": "exclusive_write", 00:15:49.022 "zoned": false, 00:15:49.022 "supported_io_types": { 00:15:49.022 "read": true, 00:15:49.022 "write": true, 00:15:49.022 "unmap": true, 00:15:49.022 "flush": true, 00:15:49.022 "reset": true, 00:15:49.022 "nvme_admin": false, 00:15:49.022 "nvme_io": false, 00:15:49.022 "nvme_io_md": false, 00:15:49.022 "write_zeroes": true, 00:15:49.022 "zcopy": true, 00:15:49.022 "get_zone_info": false, 00:15:49.022 "zone_management": false, 00:15:49.022 "zone_append": false, 00:15:49.022 "compare": false, 00:15:49.022 "compare_and_write": false, 00:15:49.022 "abort": true, 00:15:49.022 "seek_hole": false, 00:15:49.022 "seek_data": false, 00:15:49.022 "copy": true, 00:15:49.022 "nvme_iov_md": false 00:15:49.022 }, 00:15:49.022 "memory_domains": [ 00:15:49.022 { 00:15:49.022 "dma_device_id": "system", 00:15:49.022 "dma_device_type": 1 00:15:49.022 }, 00:15:49.022 { 00:15:49.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.022 "dma_device_type": 2 00:15:49.022 } 00:15:49.022 ], 00:15:49.022 "driver_specific": {} 00:15:49.022 } 00:15:49.022 ] 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.022 14:08:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.281 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.281 "name": "Existed_Raid", 00:15:49.281 "uuid": 
"2fb9c9d0-db14-43e0-a674-462369b38c7c", 00:15:49.281 "strip_size_kb": 0, 00:15:49.281 "state": "online", 00:15:49.281 "raid_level": "raid1", 00:15:49.281 "superblock": true, 00:15:49.281 "num_base_bdevs": 2, 00:15:49.281 "num_base_bdevs_discovered": 2, 00:15:49.281 "num_base_bdevs_operational": 2, 00:15:49.281 "base_bdevs_list": [ 00:15:49.281 { 00:15:49.281 "name": "BaseBdev1", 00:15:49.281 "uuid": "7ad868af-c127-4ee5-8ad0-95623d60e69a", 00:15:49.281 "is_configured": true, 00:15:49.281 "data_offset": 2048, 00:15:49.281 "data_size": 63488 00:15:49.281 }, 00:15:49.281 { 00:15:49.281 "name": "BaseBdev2", 00:15:49.281 "uuid": "400baf47-53af-4a34-9082-daacdb145256", 00:15:49.281 "is_configured": true, 00:15:49.281 "data_offset": 2048, 00:15:49.281 "data_size": 63488 00:15:49.281 } 00:15:49.281 ] 00:15:49.281 }' 00:15:49.281 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.281 14:08:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:49.848 14:08:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:50.107 [2024-07-15 14:08:36.017491] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.107 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:50.107 "name": "Existed_Raid", 00:15:50.107 "aliases": [ 00:15:50.107 "2fb9c9d0-db14-43e0-a674-462369b38c7c" 00:15:50.107 ], 00:15:50.107 "product_name": "Raid Volume", 00:15:50.107 "block_size": 512, 00:15:50.107 "num_blocks": 63488, 00:15:50.107 "uuid": "2fb9c9d0-db14-43e0-a674-462369b38c7c", 00:15:50.107 "assigned_rate_limits": { 00:15:50.107 "rw_ios_per_sec": 0, 00:15:50.107 "rw_mbytes_per_sec": 0, 00:15:50.107 "r_mbytes_per_sec": 0, 00:15:50.107 "w_mbytes_per_sec": 0 00:15:50.107 }, 00:15:50.107 "claimed": false, 00:15:50.107 "zoned": false, 00:15:50.107 "supported_io_types": { 00:15:50.107 "read": true, 00:15:50.107 "write": true, 00:15:50.107 "unmap": false, 00:15:50.107 "flush": false, 00:15:50.107 "reset": true, 00:15:50.107 "nvme_admin": false, 00:15:50.107 "nvme_io": false, 00:15:50.107 "nvme_io_md": false, 00:15:50.107 "write_zeroes": true, 00:15:50.107 "zcopy": false, 00:15:50.107 "get_zone_info": false, 00:15:50.107 "zone_management": false, 00:15:50.107 "zone_append": false, 00:15:50.107 "compare": false, 00:15:50.107 "compare_and_write": false, 00:15:50.107 "abort": false, 00:15:50.107 "seek_hole": false, 00:15:50.107 "seek_data": false, 00:15:50.107 "copy": false, 00:15:50.108 "nvme_iov_md": false 00:15:50.108 }, 00:15:50.108 "memory_domains": [ 00:15:50.108 { 00:15:50.108 
"dma_device_id": "system", 00:15:50.108 "dma_device_type": 1 00:15:50.108 }, 00:15:50.108 { 00:15:50.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.108 "dma_device_type": 2 00:15:50.108 }, 00:15:50.108 { 00:15:50.108 "dma_device_id": "system", 00:15:50.108 "dma_device_type": 1 00:15:50.108 }, 00:15:50.108 { 00:15:50.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.108 "dma_device_type": 2 00:15:50.108 } 00:15:50.108 ], 00:15:50.108 "driver_specific": { 00:15:50.108 "raid": { 00:15:50.108 "uuid": "2fb9c9d0-db14-43e0-a674-462369b38c7c", 00:15:50.108 "strip_size_kb": 0, 00:15:50.108 "state": "online", 00:15:50.108 "raid_level": "raid1", 00:15:50.108 "superblock": true, 00:15:50.108 "num_base_bdevs": 2, 00:15:50.108 "num_base_bdevs_discovered": 2, 00:15:50.108 "num_base_bdevs_operational": 2, 00:15:50.108 "base_bdevs_list": [ 00:15:50.108 { 00:15:50.108 "name": "BaseBdev1", 00:15:50.108 "uuid": "7ad868af-c127-4ee5-8ad0-95623d60e69a", 00:15:50.108 "is_configured": true, 00:15:50.108 "data_offset": 2048, 00:15:50.108 "data_size": 63488 00:15:50.108 }, 00:15:50.108 { 00:15:50.108 "name": "BaseBdev2", 00:15:50.108 "uuid": "400baf47-53af-4a34-9082-daacdb145256", 00:15:50.108 "is_configured": true, 00:15:50.108 "data_offset": 2048, 00:15:50.108 "data_size": 63488 00:15:50.108 } 00:15:50.108 ] 00:15:50.108 } 00:15:50.108 } 00:15:50.108 }' 00:15:50.108 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.108 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:50.108 BaseBdev2' 00:15:50.108 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:50.108 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:50.108 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:50.674 "name": "BaseBdev1", 00:15:50.674 "aliases": [ 00:15:50.674 "7ad868af-c127-4ee5-8ad0-95623d60e69a" 00:15:50.674 ], 00:15:50.674 "product_name": "Malloc disk", 00:15:50.674 "block_size": 512, 00:15:50.674 "num_blocks": 65536, 00:15:50.674 "uuid": "7ad868af-c127-4ee5-8ad0-95623d60e69a", 00:15:50.674 "assigned_rate_limits": { 00:15:50.674 "rw_ios_per_sec": 0, 00:15:50.674 "rw_mbytes_per_sec": 0, 00:15:50.674 "r_mbytes_per_sec": 0, 00:15:50.674 "w_mbytes_per_sec": 0 00:15:50.674 }, 00:15:50.674 "claimed": true, 00:15:50.674 "claim_type": "exclusive_write", 00:15:50.674 "zoned": false, 00:15:50.674 "supported_io_types": { 00:15:50.674 "read": true, 00:15:50.674 "write": true, 00:15:50.674 "unmap": true, 00:15:50.674 "flush": true, 00:15:50.674 "reset": true, 00:15:50.674 "nvme_admin": false, 00:15:50.674 "nvme_io": false, 00:15:50.674 "nvme_io_md": false, 00:15:50.674 "write_zeroes": true, 00:15:50.674 "zcopy": true, 00:15:50.674 "get_zone_info": false, 00:15:50.674 "zone_management": false, 00:15:50.674 "zone_append": false, 00:15:50.674 "compare": false, 00:15:50.674 "compare_and_write": false, 00:15:50.674 "abort": true, 00:15:50.674 "seek_hole": false, 00:15:50.674 "seek_data": false, 00:15:50.674 "copy": true, 00:15:50.674 "nvme_iov_md": false 00:15:50.674 }, 00:15:50.674 "memory_domains": [ 00:15:50.674 { 00:15:50.674 
"dma_device_id": "system", 00:15:50.674 "dma_device_type": 1 00:15:50.674 }, 00:15:50.674 { 00:15:50.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.674 "dma_device_type": 2 00:15:50.674 } 00:15:50.674 ], 00:15:50.674 "driver_specific": {} 00:15:50.674 }' 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.674 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.931 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.931 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.931 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:50.931 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:50.931 14:08:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:51.220 "name": "BaseBdev2", 00:15:51.220 "aliases": [ 00:15:51.220 "400baf47-53af-4a34-9082-daacdb145256" 00:15:51.220 ], 00:15:51.220 "product_name": "Malloc disk", 00:15:51.220 "block_size": 512, 00:15:51.220 "num_blocks": 65536, 00:15:51.220 "uuid": "400baf47-53af-4a34-9082-daacdb145256", 00:15:51.220 "assigned_rate_limits": { 00:15:51.220 "rw_ios_per_sec": 0, 00:15:51.220 "rw_mbytes_per_sec": 0, 00:15:51.220 "r_mbytes_per_sec": 0, 00:15:51.220 "w_mbytes_per_sec": 0 00:15:51.220 }, 00:15:51.220 "claimed": true, 00:15:51.220 "claim_type": "exclusive_write", 00:15:51.220 "zoned": false, 00:15:51.220 "supported_io_types": { 00:15:51.220 "read": true, 00:15:51.220 "write": true, 00:15:51.220 "unmap": true, 00:15:51.220 "flush": true, 00:15:51.220 "reset": true, 00:15:51.220 "nvme_admin": false, 00:15:51.220 "nvme_io": false, 00:15:51.220 "nvme_io_md": false, 00:15:51.220 "write_zeroes": true, 00:15:51.220 "zcopy": true, 00:15:51.220 "get_zone_info": false, 00:15:51.220 "zone_management": false, 00:15:51.220 "zone_append": false, 00:15:51.220 "compare": false, 00:15:51.220 "compare_and_write": false, 00:15:51.220 "abort": true, 00:15:51.220 "seek_hole": false, 00:15:51.220 "seek_data": false, 00:15:51.220 "copy": true, 00:15:51.220 "nvme_iov_md": false 00:15:51.220 }, 00:15:51.220 "memory_domains": [ 00:15:51.220 { 00:15:51.220 "dma_device_id": "system", 00:15:51.220 "dma_device_type": 1 00:15:51.220 }, 00:15:51.220 { 00:15:51.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:51.220 "dma_device_type": 2 00:15:51.220 } 00:15:51.220 ], 00:15:51.220 "driver_specific": {} 00:15:51.220 }' 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:51.220 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.505 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:51.505 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:51.505 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.505 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:51.505 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:51.505 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:51.763 [2024-07-15 14:08:37.620431] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:51.763 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.021 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:52.021 "name": "Existed_Raid", 00:15:52.021 "uuid": "2fb9c9d0-db14-43e0-a674-462369b38c7c", 00:15:52.021 "strip_size_kb": 0, 00:15:52.021 "state": "online", 00:15:52.021 "raid_level": "raid1", 00:15:52.021 "superblock": true, 00:15:52.021 "num_base_bdevs": 2, 00:15:52.021 "num_base_bdevs_discovered": 1, 00:15:52.021 "num_base_bdevs_operational": 1, 00:15:52.021 "base_bdevs_list": [ 00:15:52.021 { 00:15:52.021 "name": null, 00:15:52.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.021 "is_configured": false, 00:15:52.021 "data_offset": 2048, 00:15:52.021 "data_size": 63488 00:15:52.021 }, 00:15:52.021 { 00:15:52.021 "name": "BaseBdev2", 00:15:52.021 "uuid": "400baf47-53af-4a34-9082-daacdb145256", 00:15:52.021 "is_configured": true, 00:15:52.021 "data_offset": 2048, 00:15:52.021 "data_size": 63488 00:15:52.021 } 00:15:52.021 ] 00:15:52.021 }' 00:15:52.021 14:08:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.021 14:08:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.954 14:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:52.954 14:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:52.954 14:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.954 14:08:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:53.212 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:53.212 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:53.212 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:53.469 [2024-07-15 14:08:39.299839] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:53.469 [2024-07-15 14:08:39.300169] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.469 [2024-07-15 14:08:39.386244] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.469 [2024-07-15 14:08:39.386540] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.469 [2024-07-15 14:08:39.386652] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:53.469 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:53.469 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:53.469 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.469 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 189875 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 189875 ']' 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 189875 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 189875 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 189875' 00:15:53.725 killing process with pid 189875 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 189875 00:15:53.725 [2024-07-15 14:08:39.681161] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:53.725 14:08:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 189875 00:15:53.725 [2024-07-15 14:08:39.681414] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.096 14:08:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:55.096 00:15:55.096 real 0m12.981s 00:15:55.096 user 0m22.809s 00:15:55.096 sys 0m1.517s 00:15:55.096 14:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.096 14:08:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.096 ************************************ 00:15:55.096 END TEST raid_state_function_test_sb 00:15:55.096 ************************************ 00:15:55.096 14:08:40 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:55.096 14:08:40 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:55.096 14:08:40 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:55.096 14:08:40 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.097 14:08:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.097 ************************************ 00:15:55.097 START TEST raid_superblock_test 00:15:55.097 ************************************ 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=190260 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 190260 /var/tmp/spdk-raid.sock 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 190260 ']' 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:55.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.097 14:08:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.097 [2024-07-15 14:08:40.899798] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:15:55.097 [2024-07-15 14:08:40.900603] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190260 ] 00:15:55.097 [2024-07-15 14:08:41.053341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.355 [2024-07-15 14:08:41.290995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.613 [2024-07-15 14:08:41.535703] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.179 14:08:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:56.437 malloc1 00:15:56.437 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:56.695 [2024-07-15 14:08:42.493621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:56.695 [2024-07-15 14:08:42.494331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.695 [2024-07-15 14:08:42.494586] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:56.695 [2024-07-15 14:08:42.494816] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.695 [2024-07-15 14:08:42.496883] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.695 [2024-07-15 14:08:42.497140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:56.695 pt1 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:56.695 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:56.953 malloc2 00:15:56.954 14:08:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.212 [2024-07-15 14:08:43.025214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.212 [2024-07-15 14:08:43.025674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.212 [2024-07-15 14:08:43.025937] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:57.212 [2024-07-15 14:08:43.026168] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.212 [2024-07-15 14:08:43.028125] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.212 [2024-07-15 14:08:43.028361] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.212 pt2 00:15:57.212 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:57.212 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:57.212 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:57.470 [2024-07-15 14:08:43.269281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:57.470 [2024-07-15 14:08:43.271015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.470 [2024-07-15 14:08:43.271314] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:57.470 [2024-07-15 14:08:43.271461] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:57.470 [2024-07-15 14:08:43.271630] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:15:57.470 [2024-07-15 14:08:43.272030] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:57.470 [2024-07-15 14:08:43.272172] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:57.470 [2024-07-15 14:08:43.272420] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.470 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.780 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:57.780 "name": "raid_bdev1", 00:15:57.780 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:15:57.780 "strip_size_kb": 0, 00:15:57.780 "state": "online", 00:15:57.780 "raid_level": "raid1", 00:15:57.780 "superblock": true, 00:15:57.780 "num_base_bdevs": 2, 00:15:57.780 "num_base_bdevs_discovered": 2, 00:15:57.780 "num_base_bdevs_operational": 2, 00:15:57.780 "base_bdevs_list": [ 00:15:57.780 { 00:15:57.780 "name": "pt1", 00:15:57.780 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:57.780 "is_configured": true, 00:15:57.780 "data_offset": 2048, 00:15:57.780 "data_size": 63488 00:15:57.780 }, 00:15:57.780 { 00:15:57.780 "name": "pt2", 00:15:57.780 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:57.780 "is_configured": true, 00:15:57.780 "data_offset": 2048, 00:15:57.780 "data_size": 63488 00:15:57.780 } 00:15:57.780 ] 00:15:57.780 }' 00:15:57.780 14:08:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:57.780 14:08:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:58.347 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:58.605 [2024-07-15 14:08:44.377560] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.605 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:58.605 "name": "raid_bdev1", 00:15:58.605 "aliases": [ 00:15:58.605 "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e" 00:15:58.605 ], 00:15:58.605 "product_name": "Raid Volume", 00:15:58.605 "block_size": 512, 00:15:58.605 "num_blocks": 63488, 00:15:58.605 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:15:58.605 "assigned_rate_limits": { 00:15:58.605 "rw_ios_per_sec": 0, 00:15:58.605 "rw_mbytes_per_sec": 0, 00:15:58.605 "r_mbytes_per_sec": 0, 00:15:58.605 "w_mbytes_per_sec": 0 00:15:58.605 }, 
00:15:58.605 "claimed": false, 00:15:58.605 "zoned": false, 00:15:58.605 "supported_io_types": { 00:15:58.605 "read": true, 00:15:58.605 "write": true, 00:15:58.605 "unmap": false, 00:15:58.605 "flush": false, 00:15:58.605 "reset": true, 00:15:58.605 "nvme_admin": false, 00:15:58.605 "nvme_io": false, 00:15:58.605 "nvme_io_md": false, 00:15:58.605 "write_zeroes": true, 00:15:58.605 "zcopy": false, 00:15:58.605 "get_zone_info": false, 00:15:58.605 "zone_management": false, 00:15:58.605 "zone_append": false, 00:15:58.605 "compare": false, 00:15:58.605 "compare_and_write": false, 00:15:58.605 "abort": false, 00:15:58.605 "seek_hole": false, 00:15:58.605 "seek_data": false, 00:15:58.605 "copy": false, 00:15:58.605 "nvme_iov_md": false 00:15:58.605 }, 00:15:58.605 "memory_domains": [ 00:15:58.605 { 00:15:58.605 "dma_device_id": "system", 00:15:58.605 "dma_device_type": 1 00:15:58.605 }, 00:15:58.605 { 00:15:58.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.605 "dma_device_type": 2 00:15:58.605 }, 00:15:58.605 { 00:15:58.605 "dma_device_id": "system", 00:15:58.605 "dma_device_type": 1 00:15:58.605 }, 00:15:58.605 { 00:15:58.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.605 "dma_device_type": 2 00:15:58.605 } 00:15:58.605 ], 00:15:58.605 "driver_specific": { 00:15:58.605 "raid": { 00:15:58.605 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:15:58.605 "strip_size_kb": 0, 00:15:58.605 "state": "online", 00:15:58.605 "raid_level": "raid1", 00:15:58.605 "superblock": true, 00:15:58.605 "num_base_bdevs": 2, 00:15:58.605 "num_base_bdevs_discovered": 2, 00:15:58.605 "num_base_bdevs_operational": 2, 00:15:58.605 "base_bdevs_list": [ 00:15:58.605 { 00:15:58.605 "name": "pt1", 00:15:58.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.605 "is_configured": true, 00:15:58.605 "data_offset": 2048, 00:15:58.605 "data_size": 63488 00:15:58.605 }, 00:15:58.605 { 00:15:58.605 "name": "pt2", 00:15:58.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:58.605 "is_configured": true, 00:15:58.605 "data_offset": 2048, 00:15:58.605 "data_size": 63488 00:15:58.605 } 00:15:58.605 ] 00:15:58.605 } 00:15:58.605 } 00:15:58.605 }' 00:15:58.605 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.605 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:58.605 pt2' 00:15:58.605 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.605 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:58.605 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.863 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.863 "name": "pt1", 00:15:58.863 "aliases": [ 00:15:58.863 "00000000-0000-0000-0000-000000000001" 00:15:58.863 ], 00:15:58.863 "product_name": "passthru", 00:15:58.863 "block_size": 512, 00:15:58.863 "num_blocks": 65536, 00:15:58.863 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:58.863 "assigned_rate_limits": { 00:15:58.863 "rw_ios_per_sec": 0, 00:15:58.863 "rw_mbytes_per_sec": 0, 00:15:58.863 "r_mbytes_per_sec": 0, 00:15:58.863 "w_mbytes_per_sec": 0 00:15:58.863 }, 00:15:58.863 "claimed": true, 00:15:58.863 "claim_type": "exclusive_write", 00:15:58.863 "zoned": false, 00:15:58.863 
"supported_io_types": { 00:15:58.863 "read": true, 00:15:58.863 "write": true, 00:15:58.863 "unmap": true, 00:15:58.863 "flush": true, 00:15:58.863 "reset": true, 00:15:58.863 "nvme_admin": false, 00:15:58.863 "nvme_io": false, 00:15:58.863 "nvme_io_md": false, 00:15:58.863 "write_zeroes": true, 00:15:58.863 "zcopy": true, 00:15:58.863 "get_zone_info": false, 00:15:58.863 "zone_management": false, 00:15:58.863 "zone_append": false, 00:15:58.863 "compare": false, 00:15:58.863 "compare_and_write": false, 00:15:58.863 "abort": true, 00:15:58.863 "seek_hole": false, 00:15:58.863 "seek_data": false, 00:15:58.863 "copy": true, 00:15:58.863 "nvme_iov_md": false 00:15:58.863 }, 00:15:58.863 "memory_domains": [ 00:15:58.863 { 00:15:58.863 "dma_device_id": "system", 00:15:58.863 "dma_device_type": 1 00:15:58.863 }, 00:15:58.863 { 00:15:58.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.863 "dma_device_type": 2 00:15:58.863 } 00:15:58.863 ], 00:15:58.863 "driver_specific": { 00:15:58.863 "passthru": { 00:15:58.863 "name": "pt1", 00:15:58.863 "base_bdev_name": "malloc1" 00:15:58.863 } 00:15:58.863 } 00:15:58.863 }' 00:15:58.863 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.863 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.863 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.863 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.120 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.120 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:59.120 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.120 14:08:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.120 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:59.120 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.120 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.120 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:59.120 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:59.120 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:59.120 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:59.378 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:59.378 "name": "pt2", 00:15:59.378 "aliases": [ 00:15:59.378 "00000000-0000-0000-0000-000000000002" 00:15:59.378 ], 00:15:59.378 "product_name": "passthru", 00:15:59.378 "block_size": 512, 00:15:59.378 "num_blocks": 65536, 00:15:59.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:59.378 "assigned_rate_limits": { 00:15:59.378 "rw_ios_per_sec": 0, 00:15:59.378 "rw_mbytes_per_sec": 0, 00:15:59.378 "r_mbytes_per_sec": 0, 00:15:59.378 "w_mbytes_per_sec": 0 00:15:59.378 }, 00:15:59.378 "claimed": true, 00:15:59.378 "claim_type": "exclusive_write", 00:15:59.378 "zoned": false, 00:15:59.378 "supported_io_types": { 00:15:59.378 "read": true, 00:15:59.378 "write": true, 00:15:59.378 "unmap": true, 00:15:59.378 "flush": true, 00:15:59.378 
"reset": true, 00:15:59.378 "nvme_admin": false, 00:15:59.378 "nvme_io": false, 00:15:59.378 "nvme_io_md": false, 00:15:59.378 "write_zeroes": true, 00:15:59.378 "zcopy": true, 00:15:59.378 "get_zone_info": false, 00:15:59.378 "zone_management": false, 00:15:59.378 "zone_append": false, 00:15:59.378 "compare": false, 00:15:59.378 "compare_and_write": false, 00:15:59.378 "abort": true, 00:15:59.378 "seek_hole": false, 00:15:59.378 "seek_data": false, 00:15:59.378 "copy": true, 00:15:59.378 "nvme_iov_md": false 00:15:59.378 }, 00:15:59.378 "memory_domains": [ 00:15:59.378 { 00:15:59.378 "dma_device_id": "system", 00:15:59.378 "dma_device_type": 1 00:15:59.378 }, 00:15:59.378 { 00:15:59.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.378 "dma_device_type": 2 00:15:59.378 } 00:15:59.378 ], 00:15:59.378 "driver_specific": { 00:15:59.378 "passthru": { 00:15:59.378 "name": "pt2", 00:15:59.378 "base_bdev_name": "malloc2" 00:15:59.378 } 00:15:59.378 } 00:15:59.378 }' 00:15:59.378 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:59.378 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.635 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:59.894 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:59.894 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:59.894 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:00.154 [2024-07-15 14:08:45.909747] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:00.154 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=fcc32d00-6680-451d-93cf-3dc9a2e6bc2e 00:16:00.154 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z fcc32d00-6680-451d-93cf-3dc9a2e6bc2e ']' 00:16:00.154 14:08:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:00.154 [2024-07-15 14:08:46.153582] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.154 [2024-07-15 14:08:46.153811] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.154 [2024-07-15 14:08:46.154007] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.154 [2024-07-15 14:08:46.154180] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:00.154 [2024-07-15 14:08:46.154317] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:16:00.412 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.412 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:00.670 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:00.670 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:00.670 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.670 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:00.928 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:00.928 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:00.928 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:00.928 14:08:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:01.500 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:16:01.759 [2024-07-15 14:08:47.517783] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:01.759 [2024-07-15 14:08:47.519484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:01.759 [2024-07-15 14:08:47.519677] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:01.759 [2024-07-15 14:08:47.519937] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:01.759 [2024-07-15 14:08:47.520126] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:01.759 [2024-07-15 14:08:47.520235] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:16:01.759 request: 00:16:01.759 { 00:16:01.759 "name": "raid_bdev1", 00:16:01.759 "raid_level": "raid1", 00:16:01.759 "base_bdevs": [ 00:16:01.759 "malloc1", 00:16:01.759 "malloc2" 00:16:01.759 ], 00:16:01.759 "superblock": false, 00:16:01.759 "method": "bdev_raid_create", 00:16:01.759 "req_id": 1 00:16:01.759 } 00:16:01.759 Got JSON-RPC error response 00:16:01.759 response: 00:16:01.759 { 00:16:01.759 "code": -17, 00:16:01.759 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:01.759 } 00:16:01.759 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:01.759 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:01.759 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:01.759 14:08:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:01.759 14:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.759 14:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:02.018 14:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:02.018 14:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:02.018 14:08:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:02.277 [2024-07-15 14:08:48.065795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:02.277 [2024-07-15 14:08:48.066177] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:02.277 [2024-07-15 14:08:48.066412] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:02.277 [2024-07-15 14:08:48.066593] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:02.277 [2024-07-15 14:08:48.068578] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:02.277 [2024-07-15 14:08:48.068814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:02.277 [2024-07-15 14:08:48.069047] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:02.277 [2024-07-15 14:08:48.069220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:02.277 pt1 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.277 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.536 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.536 "name": "raid_bdev1", 00:16:02.536 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:16:02.536 "strip_size_kb": 0, 00:16:02.536 "state": "configuring", 00:16:02.536 "raid_level": "raid1", 00:16:02.536 "superblock": true, 00:16:02.536 "num_base_bdevs": 2, 00:16:02.536 "num_base_bdevs_discovered": 1, 00:16:02.536 "num_base_bdevs_operational": 2, 00:16:02.536 "base_bdevs_list": [ 00:16:02.536 { 00:16:02.536 "name": "pt1", 00:16:02.536 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.536 "is_configured": true, 00:16:02.536 "data_offset": 2048, 00:16:02.536 "data_size": 63488 00:16:02.536 }, 00:16:02.536 { 00:16:02.536 "name": null, 00:16:02.536 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.536 "is_configured": false, 00:16:02.536 "data_offset": 2048, 00:16:02.536 "data_size": 63488 00:16:02.536 } 00:16:02.536 ] 00:16:02.536 }' 00:16:02.536 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.536 14:08:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.104 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:03.104 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:03.104 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:03.104 14:08:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.363 [2024-07-15 14:08:49.209214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.363 [2024-07-15 14:08:49.209478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.363 [2024-07-15 14:08:49.209645] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:03.363 [2024-07-15 14:08:49.209795] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.363 [2024-07-15 14:08:49.210270] vbdev_passthru.c: 
708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.363 [2024-07-15 14:08:49.210452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.363 [2024-07-15 14:08:49.210669] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.363 [2024-07-15 14:08:49.210826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.363 [2024-07-15 14:08:49.211032] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:16:03.363 [2024-07-15 14:08:49.211143] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:03.363 [2024-07-15 14:08:49.211329] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:03.363 [2024-07-15 14:08:49.211694] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:16:03.363 [2024-07-15 14:08:49.211830] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:16:03.363 [2024-07-15 14:08:49.212038] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.363 pt2 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.363 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.633 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.633 "name": "raid_bdev1", 00:16:03.633 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:16:03.633 "strip_size_kb": 0, 00:16:03.633 "state": "online", 00:16:03.633 "raid_level": "raid1", 00:16:03.633 "superblock": true, 00:16:03.633 "num_base_bdevs": 2, 00:16:03.633 "num_base_bdevs_discovered": 2, 00:16:03.633 "num_base_bdevs_operational": 2, 00:16:03.633 "base_bdevs_list": [ 00:16:03.633 { 00:16:03.633 "name": "pt1", 00:16:03.633 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.633 "is_configured": true, 00:16:03.633 "data_offset": 2048, 00:16:03.633 "data_size": 63488 00:16:03.633 }, 00:16:03.633 { 
00:16:03.633 "name": "pt2", 00:16:03.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.633 "is_configured": true, 00:16:03.633 "data_offset": 2048, 00:16:03.633 "data_size": 63488 00:16:03.633 } 00:16:03.633 ] 00:16:03.633 }' 00:16:03.633 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.633 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:04.220 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:04.481 [2024-07-15 14:08:50.421983] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.481 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:04.481 "name": "raid_bdev1", 00:16:04.481 "aliases": [ 00:16:04.481 "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e" 00:16:04.481 ], 00:16:04.481 "product_name": "Raid Volume", 00:16:04.481 "block_size": 512, 00:16:04.481 "num_blocks": 63488, 00:16:04.481 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:16:04.481 "assigned_rate_limits": { 00:16:04.481 "rw_ios_per_sec": 0, 00:16:04.481 "rw_mbytes_per_sec": 0, 00:16:04.481 "r_mbytes_per_sec": 0, 00:16:04.481 "w_mbytes_per_sec": 0 00:16:04.481 }, 00:16:04.481 "claimed": false, 00:16:04.481 "zoned": false, 00:16:04.481 "supported_io_types": { 00:16:04.481 "read": true, 00:16:04.481 "write": true, 00:16:04.481 "unmap": false, 00:16:04.481 "flush": false, 00:16:04.481 "reset": true, 00:16:04.481 "nvme_admin": false, 00:16:04.481 "nvme_io": false, 00:16:04.481 "nvme_io_md": false, 00:16:04.481 "write_zeroes": true, 00:16:04.481 "zcopy": false, 00:16:04.481 "get_zone_info": false, 00:16:04.481 "zone_management": false, 00:16:04.481 "zone_append": false, 00:16:04.481 "compare": false, 00:16:04.481 "compare_and_write": false, 00:16:04.481 "abort": false, 00:16:04.481 "seek_hole": false, 00:16:04.481 "seek_data": false, 00:16:04.481 "copy": false, 00:16:04.481 "nvme_iov_md": false 00:16:04.481 }, 00:16:04.481 "memory_domains": [ 00:16:04.481 { 00:16:04.481 "dma_device_id": "system", 00:16:04.481 "dma_device_type": 1 00:16:04.481 }, 00:16:04.481 { 00:16:04.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.481 "dma_device_type": 2 00:16:04.481 }, 00:16:04.481 { 00:16:04.481 "dma_device_id": "system", 00:16:04.481 "dma_device_type": 1 00:16:04.481 }, 00:16:04.481 { 00:16:04.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.481 "dma_device_type": 2 00:16:04.481 } 00:16:04.481 ], 00:16:04.481 "driver_specific": { 00:16:04.481 "raid": { 00:16:04.481 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:16:04.481 "strip_size_kb": 0, 00:16:04.481 "state": "online", 00:16:04.481 "raid_level": "raid1", 
00:16:04.481 "superblock": true, 00:16:04.481 "num_base_bdevs": 2, 00:16:04.481 "num_base_bdevs_discovered": 2, 00:16:04.481 "num_base_bdevs_operational": 2, 00:16:04.481 "base_bdevs_list": [ 00:16:04.481 { 00:16:04.481 "name": "pt1", 00:16:04.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.481 "is_configured": true, 00:16:04.481 "data_offset": 2048, 00:16:04.481 "data_size": 63488 00:16:04.481 }, 00:16:04.481 { 00:16:04.481 "name": "pt2", 00:16:04.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.481 "is_configured": true, 00:16:04.481 "data_offset": 2048, 00:16:04.481 "data_size": 63488 00:16:04.481 } 00:16:04.481 ] 00:16:04.481 } 00:16:04.481 } 00:16:04.481 }' 00:16:04.481 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.739 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:04.739 pt2' 00:16:04.739 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:04.739 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:04.739 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:04.739 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:04.739 "name": "pt1", 00:16:04.739 "aliases": [ 00:16:04.739 "00000000-0000-0000-0000-000000000001" 00:16:04.739 ], 00:16:04.739 "product_name": "passthru", 00:16:04.739 "block_size": 512, 00:16:04.739 "num_blocks": 65536, 00:16:04.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.739 "assigned_rate_limits": { 00:16:04.739 "rw_ios_per_sec": 0, 00:16:04.739 "rw_mbytes_per_sec": 0, 00:16:04.739 "r_mbytes_per_sec": 0, 00:16:04.739 "w_mbytes_per_sec": 0 00:16:04.739 }, 00:16:04.739 "claimed": true, 00:16:04.739 "claim_type": "exclusive_write", 00:16:04.739 "zoned": false, 00:16:04.739 "supported_io_types": { 00:16:04.739 "read": true, 00:16:04.739 "write": true, 00:16:04.739 "unmap": true, 00:16:04.739 "flush": true, 00:16:04.739 "reset": true, 00:16:04.739 "nvme_admin": false, 00:16:04.739 "nvme_io": false, 00:16:04.739 "nvme_io_md": false, 00:16:04.739 "write_zeroes": true, 00:16:04.739 "zcopy": true, 00:16:04.739 "get_zone_info": false, 00:16:04.739 "zone_management": false, 00:16:04.739 "zone_append": false, 00:16:04.739 "compare": false, 00:16:04.739 "compare_and_write": false, 00:16:04.739 "abort": true, 00:16:04.739 "seek_hole": false, 00:16:04.739 "seek_data": false, 00:16:04.739 "copy": true, 00:16:04.739 "nvme_iov_md": false 00:16:04.739 }, 00:16:04.739 "memory_domains": [ 00:16:04.739 { 00:16:04.739 "dma_device_id": "system", 00:16:04.739 "dma_device_type": 1 00:16:04.739 }, 00:16:04.739 { 00:16:04.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.739 "dma_device_type": 2 00:16:04.739 } 00:16:04.739 ], 00:16:04.739 "driver_specific": { 00:16:04.739 "passthru": { 00:16:04.739 "name": "pt1", 00:16:04.739 "base_bdev_name": "malloc1" 00:16:04.739 } 00:16:04.739 } 00:16:04.739 }' 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:04.997 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.254 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.254 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.254 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.255 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.255 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:05.255 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:05.255 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:05.512 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:05.512 "name": "pt2", 00:16:05.512 "aliases": [ 00:16:05.512 "00000000-0000-0000-0000-000000000002" 00:16:05.512 ], 00:16:05.512 "product_name": "passthru", 00:16:05.512 "block_size": 512, 00:16:05.512 "num_blocks": 65536, 00:16:05.512 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.512 "assigned_rate_limits": { 00:16:05.512 "rw_ios_per_sec": 0, 00:16:05.512 "rw_mbytes_per_sec": 0, 00:16:05.512 "r_mbytes_per_sec": 0, 00:16:05.512 "w_mbytes_per_sec": 0 00:16:05.512 }, 00:16:05.512 "claimed": true, 00:16:05.512 "claim_type": "exclusive_write", 00:16:05.512 "zoned": false, 00:16:05.512 "supported_io_types": { 00:16:05.512 "read": true, 00:16:05.512 "write": true, 00:16:05.512 "unmap": true, 00:16:05.512 "flush": true, 00:16:05.512 "reset": true, 00:16:05.512 "nvme_admin": false, 00:16:05.512 "nvme_io": false, 00:16:05.512 "nvme_io_md": false, 00:16:05.512 "write_zeroes": true, 00:16:05.512 "zcopy": true, 00:16:05.512 "get_zone_info": false, 00:16:05.512 "zone_management": false, 00:16:05.512 "zone_append": false, 00:16:05.512 "compare": false, 00:16:05.512 "compare_and_write": false, 00:16:05.512 "abort": true, 00:16:05.512 "seek_hole": false, 00:16:05.512 "seek_data": false, 00:16:05.512 "copy": true, 00:16:05.512 "nvme_iov_md": false 00:16:05.512 }, 00:16:05.512 "memory_domains": [ 00:16:05.512 { 00:16:05.512 "dma_device_id": "system", 00:16:05.512 "dma_device_type": 1 00:16:05.512 }, 00:16:05.512 { 00:16:05.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.512 "dma_device_type": 2 00:16:05.512 } 00:16:05.512 ], 00:16:05.512 "driver_specific": { 00:16:05.512 "passthru": { 00:16:05.512 "name": "pt2", 00:16:05.512 "base_bdev_name": "malloc2" 00:16:05.512 } 00:16:05.512 } 00:16:05.512 }' 00:16:05.512 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.512 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:05.512 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:05.512 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.512 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:05.770 
14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:05.770 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:06.028 [2024-07-15 14:08:51.970151] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:06.028 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' fcc32d00-6680-451d-93cf-3dc9a2e6bc2e '!=' fcc32d00-6680-451d-93cf-3dc9a2e6bc2e ']' 00:16:06.028 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:06.028 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:06.028 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:06.028 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:06.287 [2024-07-15 14:08:52.214041] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.287 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.545 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.545 "name": "raid_bdev1", 00:16:06.545 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:16:06.545 "strip_size_kb": 0, 00:16:06.546 "state": "online", 00:16:06.546 "raid_level": "raid1", 00:16:06.546 
"superblock": true, 00:16:06.546 "num_base_bdevs": 2, 00:16:06.546 "num_base_bdevs_discovered": 1, 00:16:06.546 "num_base_bdevs_operational": 1, 00:16:06.546 "base_bdevs_list": [ 00:16:06.546 { 00:16:06.546 "name": null, 00:16:06.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.546 "is_configured": false, 00:16:06.546 "data_offset": 2048, 00:16:06.546 "data_size": 63488 00:16:06.546 }, 00:16:06.546 { 00:16:06.546 "name": "pt2", 00:16:06.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.546 "is_configured": true, 00:16:06.546 "data_offset": 2048, 00:16:06.546 "data_size": 63488 00:16:06.546 } 00:16:06.546 ] 00:16:06.546 }' 00:16:06.546 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.546 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.112 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:07.372 [2024-07-15 14:08:53.330230] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.372 [2024-07-15 14:08:53.330472] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:07.372 [2024-07-15 14:08:53.330643] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:07.372 [2024-07-15 14:08:53.330818] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.372 [2024-07-15 14:08:53.330948] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:16:07.372 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.372 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:07.940 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:07.940 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:07.940 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:07.940 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:07.940 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:08.198 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:08.198 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:08.198 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:08.198 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:08.198 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:16:08.198 14:08:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:08.457 [2024-07-15 14:08:54.202321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:08.457 [2024-07-15 14:08:54.202603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.457 [2024-07-15 14:08:54.202789] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:08.457 [2024-07-15 14:08:54.202927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.457 [2024-07-15 14:08:54.204700] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.457 [2024-07-15 14:08:54.204904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:08.457 [2024-07-15 14:08:54.205138] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:08.457 [2024-07-15 14:08:54.205311] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:08.457 [2024-07-15 14:08:54.205533] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:16:08.457 [2024-07-15 14:08:54.205655] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:08.457 [2024-07-15 14:08:54.205838] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:08.457 [2024-07-15 14:08:54.206173] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:08.457 [2024-07-15 14:08:54.206291] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:08.457 [2024-07-15 14:08:54.206524] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.457 pt2 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.457 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.715 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.715 "name": "raid_bdev1", 00:16:08.715 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:16:08.715 "strip_size_kb": 0, 00:16:08.715 "state": "online", 00:16:08.715 "raid_level": "raid1", 00:16:08.715 "superblock": true, 00:16:08.715 "num_base_bdevs": 2, 00:16:08.715 "num_base_bdevs_discovered": 1, 00:16:08.715 "num_base_bdevs_operational": 1, 00:16:08.715 "base_bdevs_list": [ 00:16:08.715 { 00:16:08.715 "name": null, 00:16:08.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.715 "is_configured": false, 00:16:08.715 "data_offset": 
2048, 00:16:08.715 "data_size": 63488 00:16:08.715 }, 00:16:08.715 { 00:16:08.715 "name": "pt2", 00:16:08.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:08.715 "is_configured": true, 00:16:08.715 "data_offset": 2048, 00:16:08.715 "data_size": 63488 00:16:08.715 } 00:16:08.715 ] 00:16:08.715 }' 00:16:08.715 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.715 14:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.279 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:09.536 [2024-07-15 14:08:55.342657] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:09.536 [2024-07-15 14:08:55.342871] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.536 [2024-07-15 14:08:55.343047] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.536 [2024-07-15 14:08:55.343131] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.536 [2024-07-15 14:08:55.343175] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:09.536 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.536 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:09.794 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:09.794 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:09.794 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:16:09.794 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:10.052 [2024-07-15 14:08:55.870754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:10.052 [2024-07-15 14:08:55.871072] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.052 [2024-07-15 14:08:55.871295] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:10.052 [2024-07-15 14:08:55.871462] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.052 [2024-07-15 14:08:55.873364] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.052 [2024-07-15 14:08:55.873555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:10.052 [2024-07-15 14:08:55.873791] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:10.052 [2024-07-15 14:08:55.873953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:10.052 [2024-07-15 14:08:55.874181] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:10.052 [2024-07-15 14:08:55.874303] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:10.052 [2024-07-15 14:08:55.874427] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, 
state configuring 00:16:10.052 [2024-07-15 14:08:55.874617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:10.052 [2024-07-15 14:08:55.874797] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:16:10.052 [2024-07-15 14:08:55.874919] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:10.052 [2024-07-15 14:08:55.875108] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:10.052 [2024-07-15 14:08:55.875465] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:16:10.052 [2024-07-15 14:08:55.875586] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:16:10.052 [2024-07-15 14:08:55.875829] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.052 pt1 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.052 14:08:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.309 14:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:10.309 "name": "raid_bdev1", 00:16:10.309 "uuid": "fcc32d00-6680-451d-93cf-3dc9a2e6bc2e", 00:16:10.309 "strip_size_kb": 0, 00:16:10.309 "state": "online", 00:16:10.309 "raid_level": "raid1", 00:16:10.309 "superblock": true, 00:16:10.309 "num_base_bdevs": 2, 00:16:10.309 "num_base_bdevs_discovered": 1, 00:16:10.309 "num_base_bdevs_operational": 1, 00:16:10.309 "base_bdevs_list": [ 00:16:10.309 { 00:16:10.309 "name": null, 00:16:10.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.309 "is_configured": false, 00:16:10.309 "data_offset": 2048, 00:16:10.309 "data_size": 63488 00:16:10.309 }, 00:16:10.309 { 00:16:10.309 "name": "pt2", 00:16:10.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:10.309 "is_configured": true, 00:16:10.309 "data_offset": 2048, 00:16:10.309 "data_size": 63488 00:16:10.309 } 00:16:10.309 ] 00:16:10.309 }' 00:16:10.309 14:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:10.309 14:08:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.874 14:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:10.874 14:08:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:11.437 [2024-07-15 14:08:57.385153] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' fcc32d00-6680-451d-93cf-3dc9a2e6bc2e '!=' fcc32d00-6680-451d-93cf-3dc9a2e6bc2e ']' 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 190260 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 190260 ']' 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 190260 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 190260 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 190260' 00:16:11.437 killing process with pid 190260 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 190260 00:16:11.437 [2024-07-15 14:08:57.437655] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.437 14:08:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 190260 00:16:11.437 [2024-07-15 14:08:57.437890] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.437 [2024-07-15 14:08:57.438099] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.437 [2024-07-15 14:08:57.438233] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:16:11.695 [2024-07-15 14:08:57.610931] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.070 14:08:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:13.070 00:16:13.070 real 0m17.901s 00:16:13.070 user 0m32.352s 00:16:13.070 sys 0m2.084s 00:16:13.070 14:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.070 14:08:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.070 ************************************ 00:16:13.070 END TEST raid_superblock_test 00:16:13.070 ************************************ 00:16:13.070 14:08:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:13.070 
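The next test in the run, raid_read_error_test, builds each raid1 leg as a malloc bdev wrapped first in an error-injection bdev (bdev_error_create, which exposes it under the EE_ prefix) and then in a passthru bdev, assembles the raid1 volume on top, and finally injects read failures on one leg while bdevperf drives random I/O. Because raid1 can serve reads from the surviving mirror, the test expects bdevperf to report 0.00 failed I/Os per second. A minimal sketch of that construction chain, assuming the same socket path, RPC methods and bdev names that appear in the trace below:

  # malloc -> error bdev (EE_*) -> passthru -> raid1, then inject read errors on one leg
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-raid.sock
  for i in 1 2; do
      $RPC -s "$SOCK" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
      $RPC -s "$SOCK" bdev_error_create "BaseBdev${i}_malloc"
      $RPC -s "$SOCK" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  $RPC -s "$SOCK" bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  $RPC -s "$SOCK" bdev_error_inject_error EE_BaseBdev1_malloc read failure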
14:08:58 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:13.070 14:08:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:13.070 14:08:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.070 14:08:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.070 ************************************ 00:16:13.070 START TEST raid_read_error_test 00:16:13.070 ************************************ 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.BmEqmWxZqa 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=190815 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 190815 /var/tmp/spdk-raid.sock 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 190815 ']' 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.070 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.070 [2024-07-15 14:08:58.863408] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:16:13.070 [2024-07-15 14:08:58.863832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid190815 ] 00:16:13.070 [2024-07-15 14:08:59.025896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.329 [2024-07-15 14:08:59.248121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.588 [2024-07-15 14:08:59.450322] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.154 14:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.154 14:08:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:14.154 14:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:14.155 14:08:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:14.441 BaseBdev1_malloc 00:16:14.441 14:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:14.700 true 00:16:14.700 14:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:14.959 [2024-07-15 14:09:00.770995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:14.959 [2024-07-15 14:09:00.771802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:14.959 [2024-07-15 14:09:00.772060] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:14.959 [2024-07-15 14:09:00.772287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:14.959 [2024-07-15 14:09:00.774307] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:14.959 [2024-07-15 14:09:00.774574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:14.959 BaseBdev1 00:16:14.959 14:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:14.959 14:09:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:15.217 BaseBdev2_malloc 00:16:15.217 14:09:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:15.475 true 00:16:15.475 14:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:15.733 [2024-07-15 14:09:01.710165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:15.733 [2024-07-15 14:09:01.710652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.733 [2024-07-15 14:09:01.710924] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:15.733 [2024-07-15 14:09:01.711144] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.733 [2024-07-15 14:09:01.713167] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.733 [2024-07-15 14:09:01.713410] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:15.733 BaseBdev2 00:16:15.733 14:09:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:16.303 [2024-07-15 14:09:02.046317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.303 [2024-07-15 14:09:02.048194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.303 [2024-07-15 14:09:02.048531] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:16:16.303 [2024-07-15 14:09:02.048683] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:16.303 [2024-07-15 14:09:02.049011] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:16.303 [2024-07-15 14:09:02.049413] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:16:16.303 [2024-07-15 14:09:02.049545] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:16:16.303 [2024-07-15 14:09:02.049810] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.303 14:09:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.303 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.563 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.563 "name": "raid_bdev1", 00:16:16.563 "uuid": "19614f28-2e72-4cce-b26b-a8fa632ae7d4", 00:16:16.563 "strip_size_kb": 0, 00:16:16.563 "state": "online", 00:16:16.563 "raid_level": "raid1", 00:16:16.563 "superblock": true, 00:16:16.563 "num_base_bdevs": 2, 00:16:16.563 "num_base_bdevs_discovered": 2, 00:16:16.563 "num_base_bdevs_operational": 2, 00:16:16.563 "base_bdevs_list": [ 00:16:16.563 { 00:16:16.563 "name": "BaseBdev1", 00:16:16.563 "uuid": "7aacf2d3-97c4-5999-9918-facffe7a6d5d", 00:16:16.563 "is_configured": true, 00:16:16.563 "data_offset": 2048, 00:16:16.563 "data_size": 63488 00:16:16.563 }, 00:16:16.563 { 00:16:16.563 "name": "BaseBdev2", 00:16:16.563 "uuid": "077be790-3555-542c-9501-28aa362c4a2e", 00:16:16.563 "is_configured": true, 00:16:16.563 "data_offset": 2048, 00:16:16.563 "data_size": 63488 00:16:16.563 } 00:16:16.563 ] 00:16:16.563 }' 00:16:16.563 14:09:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.563 14:09:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.131 14:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:17.131 14:09:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:17.392 [2024-07-15 14:09:03.182241] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:18.332 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.591 14:09:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.591 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.850 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.850 "name": "raid_bdev1", 00:16:18.850 "uuid": "19614f28-2e72-4cce-b26b-a8fa632ae7d4", 00:16:18.850 "strip_size_kb": 0, 00:16:18.850 "state": "online", 00:16:18.850 "raid_level": "raid1", 00:16:18.850 "superblock": true, 00:16:18.850 "num_base_bdevs": 2, 00:16:18.850 "num_base_bdevs_discovered": 2, 00:16:18.850 "num_base_bdevs_operational": 2, 00:16:18.850 "base_bdevs_list": [ 00:16:18.850 { 00:16:18.850 "name": "BaseBdev1", 00:16:18.850 "uuid": "7aacf2d3-97c4-5999-9918-facffe7a6d5d", 00:16:18.850 "is_configured": true, 00:16:18.850 "data_offset": 2048, 00:16:18.850 "data_size": 63488 00:16:18.850 }, 00:16:18.850 { 00:16:18.850 "name": "BaseBdev2", 00:16:18.850 "uuid": "077be790-3555-542c-9501-28aa362c4a2e", 00:16:18.850 "is_configured": true, 00:16:18.850 "data_offset": 2048, 00:16:18.850 "data_size": 63488 00:16:18.850 } 00:16:18.850 ] 00:16:18.850 }' 00:16:18.850 14:09:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.850 14:09:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.416 14:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.687 [2024-07-15 14:09:05.653252] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.687 [2024-07-15 14:09:05.653563] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.687 [2024-07-15 14:09:05.655055] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.687 [2024-07-15 14:09:05.655210] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.687 [2024-07-15 14:09:05.655304] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.687 [2024-07-15 14:09:05.655451] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:16:19.687 0 00:16:19.687 14:09:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 190815 00:16:19.687 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 190815 ']' 00:16:19.687 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 190815 00:16:19.687 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:19.687 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:19.687 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 190815 00:16:19.945 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:19.945 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:19.945 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 190815' 00:16:19.945 
killing process with pid 190815 00:16:19.945 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 190815 00:16:19.945 14:09:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 190815 00:16:19.945 [2024-07-15 14:09:05.709222] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:19.945 [2024-07-15 14:09:05.823788] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.BmEqmWxZqa 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:21.323 00:16:21.323 real 0m8.233s 00:16:21.323 user 0m12.667s 00:16:21.323 sys 0m0.913s 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:21.323 14:09:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.323 ************************************ 00:16:21.323 END TEST raid_read_error_test 00:16:21.323 ************************************ 00:16:21.323 14:09:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:21.323 14:09:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:21.323 14:09:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:21.323 14:09:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.323 14:09:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:21.323 ************************************ 00:16:21.323 START TEST raid_write_error_test 00:16:21.323 ************************************ 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.0GrH6p7Iu0 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=191017 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 191017 /var/tmp/spdk-raid.sock 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 191017 ']' 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:21.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.323 14:09:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.323 [2024-07-15 14:09:07.155927] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
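For orientation, the write-error pass that follows boils down to the RPC sequence below, issued against the bdevperf socket the trace just opened. This is a condensed sketch assembled from the commands visible in this log (same socket path, bdev names and malloc sizes as the run above and below), not a standalone script shipped with the test suite:

  # build two malloc bdevs, wrap each in an error bdev plus a passthru bdev,
  # then assemble a raid1 with a superblock on top of the passthru devices
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # while bdevperf drives the randrw workload, fail writes on the first base bdev
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
  # raid1 tolerates the failure: the raid bdev stays online with one base bdev removed
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1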
00:16:21.323 [2024-07-15 14:09:07.156430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid191017 ] 00:16:21.324 [2024-07-15 14:09:07.316771] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.582 [2024-07-15 14:09:07.566515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.840 [2024-07-15 14:09:07.768815] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.410 14:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.410 14:09:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:22.410 14:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:22.410 14:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:22.667 BaseBdev1_malloc 00:16:22.667 14:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:22.925 true 00:16:22.925 14:09:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:23.182 [2024-07-15 14:09:09.049266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:23.182 [2024-07-15 14:09:09.050030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.182 [2024-07-15 14:09:09.050292] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:16:23.182 [2024-07-15 14:09:09.050548] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.182 [2024-07-15 14:09:09.052699] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.182 [2024-07-15 14:09:09.053001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:23.182 BaseBdev1 00:16:23.182 14:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:23.183 14:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:23.441 BaseBdev2_malloc 00:16:23.441 14:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:23.699 true 00:16:23.699 14:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:23.963 [2024-07-15 14:09:09.914352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:23.963 [2024-07-15 14:09:09.914884] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.963 [2024-07-15 14:09:09.915133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:23.963 [2024-07-15 
14:09:09.915354] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.963 [2024-07-15 14:09:09.917382] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.963 [2024-07-15 14:09:09.917623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:23.963 BaseBdev2 00:16:23.963 14:09:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:24.223 [2024-07-15 14:09:10.170505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.223 [2024-07-15 14:09:10.172437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:24.223 [2024-07-15 14:09:10.172798] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:16:24.223 [2024-07-15 14:09:10.172937] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:24.223 [2024-07-15 14:09:10.173093] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:24.223 [2024-07-15 14:09:10.173490] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:16:24.223 [2024-07-15 14:09:10.173632] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:16:24.223 [2024-07-15 14:09:10.173884] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.223 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.794 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.794 "name": "raid_bdev1", 00:16:24.794 "uuid": "5e832025-26c8-4097-a538-15224ed085ee", 00:16:24.794 "strip_size_kb": 0, 00:16:24.794 "state": "online", 00:16:24.794 "raid_level": "raid1", 00:16:24.794 "superblock": true, 00:16:24.794 "num_base_bdevs": 2, 00:16:24.794 "num_base_bdevs_discovered": 2, 00:16:24.794 "num_base_bdevs_operational": 2, 00:16:24.794 "base_bdevs_list": [ 00:16:24.794 { 00:16:24.794 "name": 
"BaseBdev1", 00:16:24.794 "uuid": "16a528a9-97b3-53ac-8203-6ed01f41a627", 00:16:24.794 "is_configured": true, 00:16:24.794 "data_offset": 2048, 00:16:24.794 "data_size": 63488 00:16:24.794 }, 00:16:24.794 { 00:16:24.794 "name": "BaseBdev2", 00:16:24.794 "uuid": "d5a631cc-1b50-5d43-b668-6afbd06fd0dc", 00:16:24.794 "is_configured": true, 00:16:24.794 "data_offset": 2048, 00:16:24.794 "data_size": 63488 00:16:24.794 } 00:16:24.794 ] 00:16:24.794 }' 00:16:24.794 14:09:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.794 14:09:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.361 14:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:25.361 14:09:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:25.361 [2024-07-15 14:09:11.323862] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:26.298 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:26.557 [2024-07-15 14:09:12.503281] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:26.557 [2024-07-15 14:09:12.503806] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:26.557 [2024-07-15 14:09:12.504026] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005d40 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.557 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.816 
14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.816 "name": "raid_bdev1", 00:16:26.816 "uuid": "5e832025-26c8-4097-a538-15224ed085ee", 00:16:26.816 "strip_size_kb": 0, 00:16:26.816 "state": "online", 00:16:26.816 "raid_level": "raid1", 00:16:26.816 "superblock": true, 00:16:26.816 "num_base_bdevs": 2, 00:16:26.816 "num_base_bdevs_discovered": 1, 00:16:26.816 "num_base_bdevs_operational": 1, 00:16:26.816 "base_bdevs_list": [ 00:16:26.816 { 00:16:26.816 "name": null, 00:16:26.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.816 "is_configured": false, 00:16:26.816 "data_offset": 2048, 00:16:26.816 "data_size": 63488 00:16:26.816 }, 00:16:26.816 { 00:16:26.816 "name": "BaseBdev2", 00:16:26.816 "uuid": "d5a631cc-1b50-5d43-b668-6afbd06fd0dc", 00:16:26.816 "is_configured": true, 00:16:26.816 "data_offset": 2048, 00:16:26.816 "data_size": 63488 00:16:26.816 } 00:16:26.816 ] 00:16:26.816 }' 00:16:26.816 14:09:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.816 14:09:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.749 14:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:27.749 [2024-07-15 14:09:13.740112] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.749 [2024-07-15 14:09:13.740169] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.749 [2024-07-15 14:09:13.741447] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.749 [2024-07-15 14:09:13.741500] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.749 [2024-07-15 14:09:13.741535] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.749 [2024-07-15 14:09:13.741545] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:16:27.749 0 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 191017 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 191017 ']' 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 191017 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 191017 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 191017' 00:16:28.014 killing process with pid 191017 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 191017 00:16:28.014 14:09:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 191017 00:16:28.014 [2024-07-15 14:09:13.789250] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:28.014 [2024-07-15 
14:09:13.902564] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.0GrH6p7Iu0 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:29.391 00:16:29.391 real 0m8.013s 00:16:29.391 user 0m12.273s 00:16:29.391 sys 0m0.868s 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:29.391 14:09:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.391 ************************************ 00:16:29.391 END TEST raid_write_error_test 00:16:29.391 ************************************ 00:16:29.391 14:09:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:29.391 14:09:15 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:29.391 14:09:15 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:29.391 14:09:15 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:29.391 14:09:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:29.391 14:09:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.391 14:09:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.391 ************************************ 00:16:29.391 START TEST raid_state_function_test 00:16:29.391 ************************************ 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:29.391 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:29.392 14:09:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=191211 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 191211' 00:16:29.392 Process raid pid: 191211 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 191211 /var/tmp/spdk-raid.sock 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 191211 ']' 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:29.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:29.392 14:09:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.392 [2024-07-15 14:09:15.221683] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
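The state checks in this test all follow one pattern: dump the raid bdev over RPC, pick it out with jq, and compare the fields against what is expected at that stage. A minimal sketch of that check, using the names from this run (Existed_Raid, raid0, strip size 64, 3 base bdevs); the verify_raid_bdev_state helper in bdev_raid.sh performs the actual comparison with xtrace disabled, so the individual field checks do not appear in this log:

  # grab the descriptor of the raid bdev under construction
  tmp=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
  # while base bdevs are missing the state stays "configuring";
  # after BaseBdev1..3 are created and claimed it switches to "online"
  echo "$tmp" | jq -r '.state, .num_base_bdevs_discovered'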
00:16:29.392 [2024-07-15 14:09:15.222323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.392 [2024-07-15 14:09:15.384170] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.651 [2024-07-15 14:09:15.626874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.909 [2024-07-15 14:09:15.832404] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.166 14:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:30.167 14:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:16:30.167 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:30.424 [2024-07-15 14:09:16.375802] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.424 [2024-07-15 14:09:16.376254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.424 [2024-07-15 14:09:16.376290] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.424 [2024-07-15 14:09:16.376398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.424 [2024-07-15 14:09:16.376415] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:30.424 [2024-07-15 14:09:16.376491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.424 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.681 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.681 "name": "Existed_Raid", 00:16:30.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.681 
"strip_size_kb": 64, 00:16:30.681 "state": "configuring", 00:16:30.681 "raid_level": "raid0", 00:16:30.681 "superblock": false, 00:16:30.681 "num_base_bdevs": 3, 00:16:30.681 "num_base_bdevs_discovered": 0, 00:16:30.681 "num_base_bdevs_operational": 3, 00:16:30.681 "base_bdevs_list": [ 00:16:30.681 { 00:16:30.681 "name": "BaseBdev1", 00:16:30.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.681 "is_configured": false, 00:16:30.681 "data_offset": 0, 00:16:30.681 "data_size": 0 00:16:30.681 }, 00:16:30.681 { 00:16:30.681 "name": "BaseBdev2", 00:16:30.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.681 "is_configured": false, 00:16:30.681 "data_offset": 0, 00:16:30.681 "data_size": 0 00:16:30.681 }, 00:16:30.681 { 00:16:30.681 "name": "BaseBdev3", 00:16:30.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.681 "is_configured": false, 00:16:30.681 "data_offset": 0, 00:16:30.681 "data_size": 0 00:16:30.681 } 00:16:30.681 ] 00:16:30.681 }' 00:16:30.681 14:09:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.681 14:09:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.622 14:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:31.622 [2024-07-15 14:09:17.571864] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.622 [2024-07-15 14:09:17.571922] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:31.622 14:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.879 [2024-07-15 14:09:17.807922] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:31.879 [2024-07-15 14:09:17.808340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:31.880 [2024-07-15 14:09:17.808374] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.880 [2024-07-15 14:09:17.808482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.880 [2024-07-15 14:09:17.808498] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.880 [2024-07-15 14:09:17.808604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.880 14:09:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:32.138 [2024-07-15 14:09:18.135242] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.138 BaseBdev1 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.395 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:32.651 [ 00:16:32.651 { 00:16:32.651 "name": "BaseBdev1", 00:16:32.651 "aliases": [ 00:16:32.651 "f72c9097-9698-43e1-b97e-8e8c097f6f6a" 00:16:32.651 ], 00:16:32.651 "product_name": "Malloc disk", 00:16:32.651 "block_size": 512, 00:16:32.651 "num_blocks": 65536, 00:16:32.651 "uuid": "f72c9097-9698-43e1-b97e-8e8c097f6f6a", 00:16:32.651 "assigned_rate_limits": { 00:16:32.651 "rw_ios_per_sec": 0, 00:16:32.651 "rw_mbytes_per_sec": 0, 00:16:32.651 "r_mbytes_per_sec": 0, 00:16:32.651 "w_mbytes_per_sec": 0 00:16:32.651 }, 00:16:32.651 "claimed": true, 00:16:32.651 "claim_type": "exclusive_write", 00:16:32.651 "zoned": false, 00:16:32.651 "supported_io_types": { 00:16:32.651 "read": true, 00:16:32.651 "write": true, 00:16:32.651 "unmap": true, 00:16:32.651 "flush": true, 00:16:32.651 "reset": true, 00:16:32.651 "nvme_admin": false, 00:16:32.651 "nvme_io": false, 00:16:32.651 "nvme_io_md": false, 00:16:32.651 "write_zeroes": true, 00:16:32.651 "zcopy": true, 00:16:32.651 "get_zone_info": false, 00:16:32.651 "zone_management": false, 00:16:32.651 "zone_append": false, 00:16:32.651 "compare": false, 00:16:32.651 "compare_and_write": false, 00:16:32.651 "abort": true, 00:16:32.651 "seek_hole": false, 00:16:32.651 "seek_data": false, 00:16:32.651 "copy": true, 00:16:32.651 "nvme_iov_md": false 00:16:32.651 }, 00:16:32.651 "memory_domains": [ 00:16:32.651 { 00:16:32.651 "dma_device_id": "system", 00:16:32.651 "dma_device_type": 1 00:16:32.651 }, 00:16:32.651 { 00:16:32.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.651 "dma_device_type": 2 00:16:32.651 } 00:16:32.651 ], 00:16:32.651 "driver_specific": {} 00:16:32.651 } 00:16:32.651 ] 00:16:32.651 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:32.651 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:32.651 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.652 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.908 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.908 "name": "Existed_Raid", 00:16:32.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.908 "strip_size_kb": 64, 00:16:32.908 "state": "configuring", 00:16:32.908 "raid_level": "raid0", 00:16:32.908 "superblock": false, 00:16:32.908 "num_base_bdevs": 3, 00:16:32.908 "num_base_bdevs_discovered": 1, 00:16:32.908 "num_base_bdevs_operational": 3, 00:16:32.908 "base_bdevs_list": [ 00:16:32.908 { 00:16:32.908 "name": "BaseBdev1", 00:16:32.908 "uuid": "f72c9097-9698-43e1-b97e-8e8c097f6f6a", 00:16:32.908 "is_configured": true, 00:16:32.908 "data_offset": 0, 00:16:32.908 "data_size": 65536 00:16:32.908 }, 00:16:32.908 { 00:16:32.908 "name": "BaseBdev2", 00:16:32.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.909 "is_configured": false, 00:16:32.909 "data_offset": 0, 00:16:32.909 "data_size": 0 00:16:32.909 }, 00:16:32.909 { 00:16:32.909 "name": "BaseBdev3", 00:16:32.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.909 "is_configured": false, 00:16:32.909 "data_offset": 0, 00:16:32.909 "data_size": 0 00:16:32.909 } 00:16:32.909 ] 00:16:32.909 }' 00:16:32.909 14:09:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.909 14:09:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.842 14:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:33.842 [2024-07-15 14:09:19.763580] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:33.842 [2024-07-15 14:09:19.763648] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:33.842 14:09:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:34.100 [2024-07-15 14:09:19.999662] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:34.100 [2024-07-15 14:09:20.001188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.100 [2024-07-15 14:09:20.001277] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.100 [2024-07-15 14:09:20.001292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:34.100 [2024-07-15 14:09:20.001323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=configuring 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.100 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.358 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.358 "name": "Existed_Raid", 00:16:34.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.358 "strip_size_kb": 64, 00:16:34.358 "state": "configuring", 00:16:34.358 "raid_level": "raid0", 00:16:34.358 "superblock": false, 00:16:34.358 "num_base_bdevs": 3, 00:16:34.358 "num_base_bdevs_discovered": 1, 00:16:34.358 "num_base_bdevs_operational": 3, 00:16:34.358 "base_bdevs_list": [ 00:16:34.358 { 00:16:34.358 "name": "BaseBdev1", 00:16:34.358 "uuid": "f72c9097-9698-43e1-b97e-8e8c097f6f6a", 00:16:34.358 "is_configured": true, 00:16:34.358 "data_offset": 0, 00:16:34.358 "data_size": 65536 00:16:34.358 }, 00:16:34.358 { 00:16:34.358 "name": "BaseBdev2", 00:16:34.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.358 "is_configured": false, 00:16:34.358 "data_offset": 0, 00:16:34.358 "data_size": 0 00:16:34.358 }, 00:16:34.358 { 00:16:34.358 "name": "BaseBdev3", 00:16:34.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.358 "is_configured": false, 00:16:34.358 "data_offset": 0, 00:16:34.358 "data_size": 0 00:16:34.358 } 00:16:34.358 ] 00:16:34.358 }' 00:16:34.358 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.358 14:09:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.300 14:09:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.300 [2024-07-15 14:09:21.245527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.300 BaseBdev2 00:16:35.300 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:35.300 14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:35.300 14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:35.300 14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:35.300 14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:35.300 14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:35.300 
14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:35.866 14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.866 [ 00:16:35.866 { 00:16:35.866 "name": "BaseBdev2", 00:16:35.866 "aliases": [ 00:16:35.866 "09bd8d55-a494-4888-8129-2ff83078b5f4" 00:16:35.866 ], 00:16:35.866 "product_name": "Malloc disk", 00:16:35.866 "block_size": 512, 00:16:35.866 "num_blocks": 65536, 00:16:35.866 "uuid": "09bd8d55-a494-4888-8129-2ff83078b5f4", 00:16:35.866 "assigned_rate_limits": { 00:16:35.866 "rw_ios_per_sec": 0, 00:16:35.866 "rw_mbytes_per_sec": 0, 00:16:35.866 "r_mbytes_per_sec": 0, 00:16:35.866 "w_mbytes_per_sec": 0 00:16:35.866 }, 00:16:35.866 "claimed": true, 00:16:35.866 "claim_type": "exclusive_write", 00:16:35.866 "zoned": false, 00:16:35.866 "supported_io_types": { 00:16:35.866 "read": true, 00:16:35.866 "write": true, 00:16:35.866 "unmap": true, 00:16:35.866 "flush": true, 00:16:35.866 "reset": true, 00:16:35.866 "nvme_admin": false, 00:16:35.866 "nvme_io": false, 00:16:35.866 "nvme_io_md": false, 00:16:35.866 "write_zeroes": true, 00:16:35.866 "zcopy": true, 00:16:35.866 "get_zone_info": false, 00:16:35.866 "zone_management": false, 00:16:35.866 "zone_append": false, 00:16:35.866 "compare": false, 00:16:35.867 "compare_and_write": false, 00:16:35.867 "abort": true, 00:16:35.867 "seek_hole": false, 00:16:35.867 "seek_data": false, 00:16:35.867 "copy": true, 00:16:35.867 "nvme_iov_md": false 00:16:35.867 }, 00:16:35.867 "memory_domains": [ 00:16:35.867 { 00:16:35.867 "dma_device_id": "system", 00:16:35.867 "dma_device_type": 1 00:16:35.867 }, 00:16:35.867 { 00:16:35.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.867 "dma_device_type": 2 00:16:35.867 } 00:16:35.867 ], 00:16:35.867 "driver_specific": {} 00:16:35.867 } 00:16:35.867 ] 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.867 14:09:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.867 14:09:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:36.124 14:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:36.124 "name": "Existed_Raid", 00:16:36.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.124 "strip_size_kb": 64, 00:16:36.124 "state": "configuring", 00:16:36.124 "raid_level": "raid0", 00:16:36.124 "superblock": false, 00:16:36.124 "num_base_bdevs": 3, 00:16:36.124 "num_base_bdevs_discovered": 2, 00:16:36.124 "num_base_bdevs_operational": 3, 00:16:36.124 "base_bdevs_list": [ 00:16:36.124 { 00:16:36.124 "name": "BaseBdev1", 00:16:36.124 "uuid": "f72c9097-9698-43e1-b97e-8e8c097f6f6a", 00:16:36.124 "is_configured": true, 00:16:36.124 "data_offset": 0, 00:16:36.124 "data_size": 65536 00:16:36.124 }, 00:16:36.124 { 00:16:36.124 "name": "BaseBdev2", 00:16:36.124 "uuid": "09bd8d55-a494-4888-8129-2ff83078b5f4", 00:16:36.124 "is_configured": true, 00:16:36.124 "data_offset": 0, 00:16:36.124 "data_size": 65536 00:16:36.124 }, 00:16:36.124 { 00:16:36.124 "name": "BaseBdev3", 00:16:36.124 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.124 "is_configured": false, 00:16:36.124 "data_offset": 0, 00:16:36.124 "data_size": 0 00:16:36.124 } 00:16:36.124 ] 00:16:36.124 }' 00:16:36.124 14:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:36.124 14:09:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.059 14:09:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:37.317 [2024-07-15 14:09:23.073852] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.317 [2024-07-15 14:09:23.073918] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:37.317 [2024-07-15 14:09:23.073929] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:37.318 [2024-07-15 14:09:23.074026] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:37.318 [2024-07-15 14:09:23.074291] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:37.318 [2024-07-15 14:09:23.074314] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:37.318 [2024-07-15 14:09:23.074529] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.318 BaseBdev3 00:16:37.318 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:37.318 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:37.318 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:37.318 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:37.318 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:37.318 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:37.318 14:09:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:37.576 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:37.834 [ 00:16:37.834 { 00:16:37.834 "name": "BaseBdev3", 00:16:37.834 "aliases": [ 00:16:37.834 "7060d570-be09-4ccd-9534-f8aabc547cbe" 00:16:37.834 ], 00:16:37.834 "product_name": "Malloc disk", 00:16:37.834 "block_size": 512, 00:16:37.834 "num_blocks": 65536, 00:16:37.834 "uuid": "7060d570-be09-4ccd-9534-f8aabc547cbe", 00:16:37.834 "assigned_rate_limits": { 00:16:37.834 "rw_ios_per_sec": 0, 00:16:37.834 "rw_mbytes_per_sec": 0, 00:16:37.834 "r_mbytes_per_sec": 0, 00:16:37.834 "w_mbytes_per_sec": 0 00:16:37.834 }, 00:16:37.834 "claimed": true, 00:16:37.834 "claim_type": "exclusive_write", 00:16:37.834 "zoned": false, 00:16:37.834 "supported_io_types": { 00:16:37.834 "read": true, 00:16:37.834 "write": true, 00:16:37.834 "unmap": true, 00:16:37.834 "flush": true, 00:16:37.834 "reset": true, 00:16:37.834 "nvme_admin": false, 00:16:37.834 "nvme_io": false, 00:16:37.834 "nvme_io_md": false, 00:16:37.834 "write_zeroes": true, 00:16:37.834 "zcopy": true, 00:16:37.834 "get_zone_info": false, 00:16:37.834 "zone_management": false, 00:16:37.834 "zone_append": false, 00:16:37.834 "compare": false, 00:16:37.834 "compare_and_write": false, 00:16:37.834 "abort": true, 00:16:37.834 "seek_hole": false, 00:16:37.834 "seek_data": false, 00:16:37.834 "copy": true, 00:16:37.834 "nvme_iov_md": false 00:16:37.834 }, 00:16:37.834 "memory_domains": [ 00:16:37.834 { 00:16:37.834 "dma_device_id": "system", 00:16:37.834 "dma_device_type": 1 00:16:37.834 }, 00:16:37.834 { 00:16:37.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.834 "dma_device_type": 2 00:16:37.834 } 00:16:37.834 ], 00:16:37.834 "driver_specific": {} 00:16:37.834 } 00:16:37.834 ] 00:16:37.834 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:37.834 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:37.834 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:37.834 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.835 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.093 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.093 "name": "Existed_Raid", 00:16:38.093 "uuid": "8517a455-7c58-4040-82d3-97c6d2bca928", 00:16:38.093 "strip_size_kb": 64, 00:16:38.093 "state": "online", 00:16:38.093 "raid_level": "raid0", 00:16:38.093 "superblock": false, 00:16:38.093 "num_base_bdevs": 3, 00:16:38.093 "num_base_bdevs_discovered": 3, 00:16:38.093 "num_base_bdevs_operational": 3, 00:16:38.093 "base_bdevs_list": [ 00:16:38.093 { 00:16:38.093 "name": "BaseBdev1", 00:16:38.093 "uuid": "f72c9097-9698-43e1-b97e-8e8c097f6f6a", 00:16:38.093 "is_configured": true, 00:16:38.093 "data_offset": 0, 00:16:38.093 "data_size": 65536 00:16:38.093 }, 00:16:38.093 { 00:16:38.093 "name": "BaseBdev2", 00:16:38.093 "uuid": "09bd8d55-a494-4888-8129-2ff83078b5f4", 00:16:38.093 "is_configured": true, 00:16:38.093 "data_offset": 0, 00:16:38.093 "data_size": 65536 00:16:38.093 }, 00:16:38.093 { 00:16:38.093 "name": "BaseBdev3", 00:16:38.093 "uuid": "7060d570-be09-4ccd-9534-f8aabc547cbe", 00:16:38.093 "is_configured": true, 00:16:38.093 "data_offset": 0, 00:16:38.093 "data_size": 65536 00:16:38.093 } 00:16:38.093 ] 00:16:38.093 }' 00:16:38.093 14:09:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.093 14:09:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:38.660 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:38.919 [2024-07-15 14:09:24.849278] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.919 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:38.919 "name": "Existed_Raid", 00:16:38.919 "aliases": [ 00:16:38.919 "8517a455-7c58-4040-82d3-97c6d2bca928" 00:16:38.919 ], 00:16:38.919 "product_name": "Raid Volume", 00:16:38.919 "block_size": 512, 00:16:38.919 "num_blocks": 196608, 00:16:38.919 "uuid": "8517a455-7c58-4040-82d3-97c6d2bca928", 00:16:38.919 "assigned_rate_limits": { 00:16:38.919 "rw_ios_per_sec": 0, 00:16:38.919 "rw_mbytes_per_sec": 0, 00:16:38.919 "r_mbytes_per_sec": 0, 00:16:38.919 "w_mbytes_per_sec": 0 00:16:38.919 }, 00:16:38.919 "claimed": false, 00:16:38.919 "zoned": false, 00:16:38.919 "supported_io_types": { 00:16:38.919 "read": true, 00:16:38.919 "write": true, 00:16:38.919 "unmap": true, 00:16:38.919 "flush": true, 00:16:38.919 "reset": true, 
00:16:38.919 "nvme_admin": false, 00:16:38.919 "nvme_io": false, 00:16:38.919 "nvme_io_md": false, 00:16:38.919 "write_zeroes": true, 00:16:38.919 "zcopy": false, 00:16:38.919 "get_zone_info": false, 00:16:38.919 "zone_management": false, 00:16:38.919 "zone_append": false, 00:16:38.919 "compare": false, 00:16:38.919 "compare_and_write": false, 00:16:38.919 "abort": false, 00:16:38.919 "seek_hole": false, 00:16:38.919 "seek_data": false, 00:16:38.919 "copy": false, 00:16:38.919 "nvme_iov_md": false 00:16:38.919 }, 00:16:38.919 "memory_domains": [ 00:16:38.919 { 00:16:38.919 "dma_device_id": "system", 00:16:38.919 "dma_device_type": 1 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.919 "dma_device_type": 2 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "dma_device_id": "system", 00:16:38.919 "dma_device_type": 1 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.919 "dma_device_type": 2 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "dma_device_id": "system", 00:16:38.919 "dma_device_type": 1 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:38.919 "dma_device_type": 2 00:16:38.919 } 00:16:38.919 ], 00:16:38.919 "driver_specific": { 00:16:38.919 "raid": { 00:16:38.919 "uuid": "8517a455-7c58-4040-82d3-97c6d2bca928", 00:16:38.919 "strip_size_kb": 64, 00:16:38.919 "state": "online", 00:16:38.919 "raid_level": "raid0", 00:16:38.919 "superblock": false, 00:16:38.919 "num_base_bdevs": 3, 00:16:38.919 "num_base_bdevs_discovered": 3, 00:16:38.919 "num_base_bdevs_operational": 3, 00:16:38.919 "base_bdevs_list": [ 00:16:38.919 { 00:16:38.919 "name": "BaseBdev1", 00:16:38.919 "uuid": "f72c9097-9698-43e1-b97e-8e8c097f6f6a", 00:16:38.919 "is_configured": true, 00:16:38.919 "data_offset": 0, 00:16:38.919 "data_size": 65536 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "name": "BaseBdev2", 00:16:38.919 "uuid": "09bd8d55-a494-4888-8129-2ff83078b5f4", 00:16:38.919 "is_configured": true, 00:16:38.919 "data_offset": 0, 00:16:38.919 "data_size": 65536 00:16:38.919 }, 00:16:38.919 { 00:16:38.919 "name": "BaseBdev3", 00:16:38.919 "uuid": "7060d570-be09-4ccd-9534-f8aabc547cbe", 00:16:38.919 "is_configured": true, 00:16:38.919 "data_offset": 0, 00:16:38.919 "data_size": 65536 00:16:38.919 } 00:16:38.919 ] 00:16:38.919 } 00:16:38.919 } 00:16:38.919 }' 00:16:38.919 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:38.919 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:38.919 BaseBdev2 00:16:38.919 BaseBdev3' 00:16:38.919 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:39.178 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:39.178 14:09:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.437 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.437 "name": "BaseBdev1", 00:16:39.437 "aliases": [ 00:16:39.437 "f72c9097-9698-43e1-b97e-8e8c097f6f6a" 00:16:39.437 ], 00:16:39.437 "product_name": "Malloc disk", 00:16:39.437 "block_size": 512, 00:16:39.437 "num_blocks": 65536, 00:16:39.437 "uuid": "f72c9097-9698-43e1-b97e-8e8c097f6f6a", 00:16:39.437 
"assigned_rate_limits": { 00:16:39.437 "rw_ios_per_sec": 0, 00:16:39.437 "rw_mbytes_per_sec": 0, 00:16:39.437 "r_mbytes_per_sec": 0, 00:16:39.437 "w_mbytes_per_sec": 0 00:16:39.437 }, 00:16:39.437 "claimed": true, 00:16:39.437 "claim_type": "exclusive_write", 00:16:39.437 "zoned": false, 00:16:39.437 "supported_io_types": { 00:16:39.437 "read": true, 00:16:39.437 "write": true, 00:16:39.437 "unmap": true, 00:16:39.437 "flush": true, 00:16:39.437 "reset": true, 00:16:39.437 "nvme_admin": false, 00:16:39.437 "nvme_io": false, 00:16:39.437 "nvme_io_md": false, 00:16:39.437 "write_zeroes": true, 00:16:39.437 "zcopy": true, 00:16:39.437 "get_zone_info": false, 00:16:39.437 "zone_management": false, 00:16:39.437 "zone_append": false, 00:16:39.438 "compare": false, 00:16:39.438 "compare_and_write": false, 00:16:39.438 "abort": true, 00:16:39.438 "seek_hole": false, 00:16:39.438 "seek_data": false, 00:16:39.438 "copy": true, 00:16:39.438 "nvme_iov_md": false 00:16:39.438 }, 00:16:39.438 "memory_domains": [ 00:16:39.438 { 00:16:39.438 "dma_device_id": "system", 00:16:39.438 "dma_device_type": 1 00:16:39.438 }, 00:16:39.438 { 00:16:39.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.438 "dma_device_type": 2 00:16:39.438 } 00:16:39.438 ], 00:16:39.438 "driver_specific": {} 00:16:39.438 }' 00:16:39.438 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.438 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.438 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:39.438 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.438 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:39.438 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:39.438 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.696 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:39.697 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:39.697 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.697 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:39.697 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:39.697 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:39.697 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:39.697 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:39.955 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:39.955 "name": "BaseBdev2", 00:16:39.955 "aliases": [ 00:16:39.955 "09bd8d55-a494-4888-8129-2ff83078b5f4" 00:16:39.955 ], 00:16:39.955 "product_name": "Malloc disk", 00:16:39.955 "block_size": 512, 00:16:39.955 "num_blocks": 65536, 00:16:39.955 "uuid": "09bd8d55-a494-4888-8129-2ff83078b5f4", 00:16:39.955 "assigned_rate_limits": { 00:16:39.955 "rw_ios_per_sec": 0, 00:16:39.955 "rw_mbytes_per_sec": 0, 00:16:39.955 "r_mbytes_per_sec": 0, 00:16:39.955 "w_mbytes_per_sec": 0 00:16:39.955 }, 00:16:39.955 
"claimed": true, 00:16:39.955 "claim_type": "exclusive_write", 00:16:39.955 "zoned": false, 00:16:39.955 "supported_io_types": { 00:16:39.955 "read": true, 00:16:39.955 "write": true, 00:16:39.955 "unmap": true, 00:16:39.955 "flush": true, 00:16:39.955 "reset": true, 00:16:39.955 "nvme_admin": false, 00:16:39.955 "nvme_io": false, 00:16:39.955 "nvme_io_md": false, 00:16:39.955 "write_zeroes": true, 00:16:39.955 "zcopy": true, 00:16:39.955 "get_zone_info": false, 00:16:39.955 "zone_management": false, 00:16:39.955 "zone_append": false, 00:16:39.955 "compare": false, 00:16:39.955 "compare_and_write": false, 00:16:39.955 "abort": true, 00:16:39.955 "seek_hole": false, 00:16:39.955 "seek_data": false, 00:16:39.955 "copy": true, 00:16:39.955 "nvme_iov_md": false 00:16:39.955 }, 00:16:39.955 "memory_domains": [ 00:16:39.955 { 00:16:39.955 "dma_device_id": "system", 00:16:39.955 "dma_device_type": 1 00:16:39.955 }, 00:16:39.955 { 00:16:39.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.955 "dma_device_type": 2 00:16:39.955 } 00:16:39.955 ], 00:16:39.955 "driver_specific": {} 00:16:39.955 }' 00:16:39.955 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:39.955 14:09:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:40.213 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.472 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.472 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:40.472 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:40.472 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:40.472 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:40.730 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:40.730 "name": "BaseBdev3", 00:16:40.730 "aliases": [ 00:16:40.730 "7060d570-be09-4ccd-9534-f8aabc547cbe" 00:16:40.730 ], 00:16:40.730 "product_name": "Malloc disk", 00:16:40.730 "block_size": 512, 00:16:40.730 "num_blocks": 65536, 00:16:40.730 "uuid": "7060d570-be09-4ccd-9534-f8aabc547cbe", 00:16:40.730 "assigned_rate_limits": { 00:16:40.730 "rw_ios_per_sec": 0, 00:16:40.730 "rw_mbytes_per_sec": 0, 00:16:40.730 "r_mbytes_per_sec": 0, 00:16:40.730 "w_mbytes_per_sec": 0 00:16:40.730 }, 00:16:40.730 "claimed": true, 00:16:40.730 "claim_type": "exclusive_write", 00:16:40.730 "zoned": false, 00:16:40.730 "supported_io_types": { 00:16:40.730 "read": true, 00:16:40.730 "write": true, 00:16:40.730 
"unmap": true, 00:16:40.730 "flush": true, 00:16:40.730 "reset": true, 00:16:40.730 "nvme_admin": false, 00:16:40.730 "nvme_io": false, 00:16:40.730 "nvme_io_md": false, 00:16:40.730 "write_zeroes": true, 00:16:40.730 "zcopy": true, 00:16:40.730 "get_zone_info": false, 00:16:40.730 "zone_management": false, 00:16:40.730 "zone_append": false, 00:16:40.730 "compare": false, 00:16:40.730 "compare_and_write": false, 00:16:40.730 "abort": true, 00:16:40.730 "seek_hole": false, 00:16:40.730 "seek_data": false, 00:16:40.730 "copy": true, 00:16:40.730 "nvme_iov_md": false 00:16:40.730 }, 00:16:40.730 "memory_domains": [ 00:16:40.730 { 00:16:40.730 "dma_device_id": "system", 00:16:40.730 "dma_device_type": 1 00:16:40.730 }, 00:16:40.730 { 00:16:40.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.730 "dma_device_type": 2 00:16:40.730 } 00:16:40.730 ], 00:16:40.730 "driver_specific": {} 00:16:40.730 }' 00:16:40.730 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.730 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:40.730 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:40.730 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:40.989 14:09:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:41.555 [2024-07-15 14:09:27.269510] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:41.555 [2024-07-15 14:09:27.269722] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.555 [2024-07-15 14:09:27.269899] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- 
# local expected_state=offline 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.555 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:41.864 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.864 "name": "Existed_Raid", 00:16:41.864 "uuid": "8517a455-7c58-4040-82d3-97c6d2bca928", 00:16:41.864 "strip_size_kb": 64, 00:16:41.864 "state": "offline", 00:16:41.864 "raid_level": "raid0", 00:16:41.864 "superblock": false, 00:16:41.864 "num_base_bdevs": 3, 00:16:41.864 "num_base_bdevs_discovered": 2, 00:16:41.864 "num_base_bdevs_operational": 2, 00:16:41.864 "base_bdevs_list": [ 00:16:41.864 { 00:16:41.864 "name": null, 00:16:41.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.864 "is_configured": false, 00:16:41.864 "data_offset": 0, 00:16:41.864 "data_size": 65536 00:16:41.864 }, 00:16:41.864 { 00:16:41.864 "name": "BaseBdev2", 00:16:41.864 "uuid": "09bd8d55-a494-4888-8129-2ff83078b5f4", 00:16:41.864 "is_configured": true, 00:16:41.864 "data_offset": 0, 00:16:41.864 "data_size": 65536 00:16:41.864 }, 00:16:41.864 { 00:16:41.864 "name": "BaseBdev3", 00:16:41.864 "uuid": "7060d570-be09-4ccd-9534-f8aabc547cbe", 00:16:41.864 "is_configured": true, 00:16:41.864 "data_offset": 0, 00:16:41.864 "data_size": 65536 00:16:41.864 } 00:16:41.864 ] 00:16:41.864 }' 00:16:41.864 14:09:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.864 14:09:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.433 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:42.433 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:42.433 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.433 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:42.691 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:42.691 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:42.691 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:42.960 [2024-07-15 14:09:28.850734] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
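The sequence traced here is driven entirely through rpc.py against the dedicated raid socket. A minimal, hand-runnable sketch of that same RPC flow, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock and reusing the bdev names and sizes seen in this run (illustrative only, not the test harness itself):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Three 32 MiB malloc bdevs with 512-byte blocks (65536 blocks each) act as base bdevs.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  $RPC bdev_malloc_create 32 512 -b BaseBdev3
  # Assemble them into a raid0 volume with a 64 KiB strip size.
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # Query the raid state (configuring/online/offline) and its member list.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # raid0 has no redundancy, so deleting any base bdev takes the volume out of the online state.
  $RPC bdev_malloc_delete BaseBdev1
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'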
00:16:42.960 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:42.960 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:42.960 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.960 14:09:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:43.219 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:43.219 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:43.219 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:43.478 [2024-07-15 14:09:29.432192] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:43.478 [2024-07-15 14:09:29.432464] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:43.737 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:43.737 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:43.737 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:43.737 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.996 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:43.996 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:43.996 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:43.996 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:43.996 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:43.996 14:09:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:44.256 BaseBdev2 00:16:44.256 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:44.256 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:44.256 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.256 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:44.256 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.256 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.256 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.514 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.773 [ 00:16:44.773 { 00:16:44.773 "name": "BaseBdev2", 
00:16:44.773 "aliases": [ 00:16:44.773 "1ef324ad-6957-4568-a3b5-9fb9cffb090b" 00:16:44.773 ], 00:16:44.773 "product_name": "Malloc disk", 00:16:44.773 "block_size": 512, 00:16:44.773 "num_blocks": 65536, 00:16:44.773 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:44.773 "assigned_rate_limits": { 00:16:44.773 "rw_ios_per_sec": 0, 00:16:44.773 "rw_mbytes_per_sec": 0, 00:16:44.773 "r_mbytes_per_sec": 0, 00:16:44.773 "w_mbytes_per_sec": 0 00:16:44.773 }, 00:16:44.773 "claimed": false, 00:16:44.773 "zoned": false, 00:16:44.773 "supported_io_types": { 00:16:44.773 "read": true, 00:16:44.773 "write": true, 00:16:44.773 "unmap": true, 00:16:44.773 "flush": true, 00:16:44.773 "reset": true, 00:16:44.773 "nvme_admin": false, 00:16:44.773 "nvme_io": false, 00:16:44.773 "nvme_io_md": false, 00:16:44.773 "write_zeroes": true, 00:16:44.773 "zcopy": true, 00:16:44.773 "get_zone_info": false, 00:16:44.773 "zone_management": false, 00:16:44.773 "zone_append": false, 00:16:44.773 "compare": false, 00:16:44.773 "compare_and_write": false, 00:16:44.773 "abort": true, 00:16:44.773 "seek_hole": false, 00:16:44.773 "seek_data": false, 00:16:44.773 "copy": true, 00:16:44.773 "nvme_iov_md": false 00:16:44.773 }, 00:16:44.773 "memory_domains": [ 00:16:44.773 { 00:16:44.773 "dma_device_id": "system", 00:16:44.773 "dma_device_type": 1 00:16:44.773 }, 00:16:44.773 { 00:16:44.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.773 "dma_device_type": 2 00:16:44.773 } 00:16:44.773 ], 00:16:44.773 "driver_specific": {} 00:16:44.773 } 00:16:44.773 ] 00:16:44.773 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:44.773 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:44.774 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:44.774 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:45.045 BaseBdev3 00:16:45.045 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:45.045 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:45.045 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:45.045 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:45.045 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:45.045 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:45.045 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:45.335 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:45.335 [ 00:16:45.335 { 00:16:45.335 "name": "BaseBdev3", 00:16:45.335 "aliases": [ 00:16:45.335 "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5" 00:16:45.335 ], 00:16:45.335 "product_name": "Malloc disk", 00:16:45.335 "block_size": 512, 00:16:45.335 "num_blocks": 65536, 00:16:45.335 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:45.335 "assigned_rate_limits": { 00:16:45.335 "rw_ios_per_sec": 0, 
00:16:45.335 "rw_mbytes_per_sec": 0, 00:16:45.335 "r_mbytes_per_sec": 0, 00:16:45.335 "w_mbytes_per_sec": 0 00:16:45.335 }, 00:16:45.335 "claimed": false, 00:16:45.335 "zoned": false, 00:16:45.335 "supported_io_types": { 00:16:45.335 "read": true, 00:16:45.335 "write": true, 00:16:45.335 "unmap": true, 00:16:45.335 "flush": true, 00:16:45.335 "reset": true, 00:16:45.335 "nvme_admin": false, 00:16:45.335 "nvme_io": false, 00:16:45.335 "nvme_io_md": false, 00:16:45.335 "write_zeroes": true, 00:16:45.335 "zcopy": true, 00:16:45.335 "get_zone_info": false, 00:16:45.335 "zone_management": false, 00:16:45.335 "zone_append": false, 00:16:45.335 "compare": false, 00:16:45.335 "compare_and_write": false, 00:16:45.335 "abort": true, 00:16:45.335 "seek_hole": false, 00:16:45.335 "seek_data": false, 00:16:45.335 "copy": true, 00:16:45.335 "nvme_iov_md": false 00:16:45.335 }, 00:16:45.335 "memory_domains": [ 00:16:45.335 { 00:16:45.335 "dma_device_id": "system", 00:16:45.335 "dma_device_type": 1 00:16:45.335 }, 00:16:45.335 { 00:16:45.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.335 "dma_device_type": 2 00:16:45.335 } 00:16:45.335 ], 00:16:45.335 "driver_specific": {} 00:16:45.335 } 00:16:45.335 ] 00:16:45.593 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:45.593 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:45.593 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:45.593 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:45.593 [2024-07-15 14:09:31.574096] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:45.593 [2024-07-15 14:09:31.574482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:45.593 [2024-07-15 14:09:31.574691] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.593 [2024-07-15 14:09:31.576512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.593 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.851 "name": "Existed_Raid", 00:16:45.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.851 "strip_size_kb": 64, 00:16:45.851 "state": "configuring", 00:16:45.851 "raid_level": "raid0", 00:16:45.851 "superblock": false, 00:16:45.851 "num_base_bdevs": 3, 00:16:45.851 "num_base_bdevs_discovered": 2, 00:16:45.851 "num_base_bdevs_operational": 3, 00:16:45.851 "base_bdevs_list": [ 00:16:45.851 { 00:16:45.851 "name": "BaseBdev1", 00:16:45.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.851 "is_configured": false, 00:16:45.851 "data_offset": 0, 00:16:45.851 "data_size": 0 00:16:45.851 }, 00:16:45.851 { 00:16:45.851 "name": "BaseBdev2", 00:16:45.851 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:45.851 "is_configured": true, 00:16:45.851 "data_offset": 0, 00:16:45.851 "data_size": 65536 00:16:45.851 }, 00:16:45.851 { 00:16:45.851 "name": "BaseBdev3", 00:16:45.851 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:45.851 "is_configured": true, 00:16:45.851 "data_offset": 0, 00:16:45.851 "data_size": 65536 00:16:45.851 } 00:16:45.851 ] 00:16:45.851 }' 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.851 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.786 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:47.044 [2024-07-15 14:09:32.798208] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.044 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:47.303 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.303 "name": "Existed_Raid", 
00:16:47.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.303 "strip_size_kb": 64, 00:16:47.303 "state": "configuring", 00:16:47.303 "raid_level": "raid0", 00:16:47.303 "superblock": false, 00:16:47.303 "num_base_bdevs": 3, 00:16:47.303 "num_base_bdevs_discovered": 1, 00:16:47.303 "num_base_bdevs_operational": 3, 00:16:47.303 "base_bdevs_list": [ 00:16:47.303 { 00:16:47.303 "name": "BaseBdev1", 00:16:47.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.303 "is_configured": false, 00:16:47.303 "data_offset": 0, 00:16:47.303 "data_size": 0 00:16:47.303 }, 00:16:47.303 { 00:16:47.303 "name": null, 00:16:47.303 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:47.303 "is_configured": false, 00:16:47.303 "data_offset": 0, 00:16:47.303 "data_size": 65536 00:16:47.303 }, 00:16:47.303 { 00:16:47.303 "name": "BaseBdev3", 00:16:47.303 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:47.303 "is_configured": true, 00:16:47.303 "data_offset": 0, 00:16:47.303 "data_size": 65536 00:16:47.303 } 00:16:47.303 ] 00:16:47.303 }' 00:16:47.303 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.303 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.869 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.869 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:48.127 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:48.127 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:48.385 [2024-07-15 14:09:34.256256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:48.385 BaseBdev1 00:16:48.385 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:48.385 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:48.385 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:48.385 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:48.385 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:48.385 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:48.385 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.644 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:48.902 [ 00:16:48.902 { 00:16:48.902 "name": "BaseBdev1", 00:16:48.902 "aliases": [ 00:16:48.902 "cbe20b1c-f737-44bd-bc90-5431e6ac05dc" 00:16:48.902 ], 00:16:48.902 "product_name": "Malloc disk", 00:16:48.902 "block_size": 512, 00:16:48.902 "num_blocks": 65536, 00:16:48.902 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:48.902 "assigned_rate_limits": { 00:16:48.902 "rw_ios_per_sec": 0, 00:16:48.902 "rw_mbytes_per_sec": 0, 00:16:48.902 
"r_mbytes_per_sec": 0, 00:16:48.902 "w_mbytes_per_sec": 0 00:16:48.902 }, 00:16:48.902 "claimed": true, 00:16:48.902 "claim_type": "exclusive_write", 00:16:48.902 "zoned": false, 00:16:48.902 "supported_io_types": { 00:16:48.902 "read": true, 00:16:48.902 "write": true, 00:16:48.902 "unmap": true, 00:16:48.902 "flush": true, 00:16:48.902 "reset": true, 00:16:48.902 "nvme_admin": false, 00:16:48.902 "nvme_io": false, 00:16:48.902 "nvme_io_md": false, 00:16:48.902 "write_zeroes": true, 00:16:48.902 "zcopy": true, 00:16:48.902 "get_zone_info": false, 00:16:48.902 "zone_management": false, 00:16:48.902 "zone_append": false, 00:16:48.902 "compare": false, 00:16:48.902 "compare_and_write": false, 00:16:48.902 "abort": true, 00:16:48.902 "seek_hole": false, 00:16:48.902 "seek_data": false, 00:16:48.902 "copy": true, 00:16:48.902 "nvme_iov_md": false 00:16:48.902 }, 00:16:48.902 "memory_domains": [ 00:16:48.902 { 00:16:48.902 "dma_device_id": "system", 00:16:48.902 "dma_device_type": 1 00:16:48.902 }, 00:16:48.902 { 00:16:48.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.902 "dma_device_type": 2 00:16:48.902 } 00:16:48.902 ], 00:16:48.902 "driver_specific": {} 00:16:48.902 } 00:16:48.902 ] 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.902 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.161 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.161 "name": "Existed_Raid", 00:16:49.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.161 "strip_size_kb": 64, 00:16:49.161 "state": "configuring", 00:16:49.161 "raid_level": "raid0", 00:16:49.161 "superblock": false, 00:16:49.161 "num_base_bdevs": 3, 00:16:49.161 "num_base_bdevs_discovered": 2, 00:16:49.161 "num_base_bdevs_operational": 3, 00:16:49.161 "base_bdevs_list": [ 00:16:49.161 { 00:16:49.161 "name": "BaseBdev1", 00:16:49.161 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:49.161 "is_configured": true, 00:16:49.161 "data_offset": 0, 00:16:49.161 "data_size": 65536 00:16:49.161 }, 00:16:49.161 { 00:16:49.161 "name": 
null, 00:16:49.161 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:49.161 "is_configured": false, 00:16:49.161 "data_offset": 0, 00:16:49.161 "data_size": 65536 00:16:49.161 }, 00:16:49.161 { 00:16:49.161 "name": "BaseBdev3", 00:16:49.161 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:49.161 "is_configured": true, 00:16:49.161 "data_offset": 0, 00:16:49.161 "data_size": 65536 00:16:49.161 } 00:16:49.161 ] 00:16:49.161 }' 00:16:49.161 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.161 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.726 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.726 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:49.984 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:49.984 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:50.243 [2024-07-15 14:09:36.149090] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.243 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.501 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:50.501 "name": "Existed_Raid", 00:16:50.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.501 "strip_size_kb": 64, 00:16:50.501 "state": "configuring", 00:16:50.501 "raid_level": "raid0", 00:16:50.501 "superblock": false, 00:16:50.501 "num_base_bdevs": 3, 00:16:50.501 "num_base_bdevs_discovered": 1, 00:16:50.501 "num_base_bdevs_operational": 3, 00:16:50.501 "base_bdevs_list": [ 00:16:50.501 { 00:16:50.501 "name": "BaseBdev1", 00:16:50.501 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:50.501 "is_configured": true, 00:16:50.501 "data_offset": 0, 00:16:50.501 "data_size": 65536 
00:16:50.501 }, 00:16:50.501 { 00:16:50.501 "name": null, 00:16:50.501 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:50.501 "is_configured": false, 00:16:50.501 "data_offset": 0, 00:16:50.501 "data_size": 65536 00:16:50.501 }, 00:16:50.501 { 00:16:50.501 "name": null, 00:16:50.501 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:50.501 "is_configured": false, 00:16:50.501 "data_offset": 0, 00:16:50.501 "data_size": 65536 00:16:50.501 } 00:16:50.501 ] 00:16:50.501 }' 00:16:50.501 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:50.501 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.440 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.440 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:51.440 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:51.440 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:51.698 [2024-07-15 14:09:37.637401] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.698 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.956 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.956 "name": "Existed_Raid", 00:16:51.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.956 "strip_size_kb": 64, 00:16:51.956 "state": "configuring", 00:16:51.956 "raid_level": "raid0", 00:16:51.956 "superblock": false, 00:16:51.956 "num_base_bdevs": 3, 00:16:51.956 "num_base_bdevs_discovered": 2, 00:16:51.956 "num_base_bdevs_operational": 3, 00:16:51.956 "base_bdevs_list": [ 00:16:51.956 { 00:16:51.956 "name": "BaseBdev1", 00:16:51.956 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:51.956 
"is_configured": true, 00:16:51.956 "data_offset": 0, 00:16:51.956 "data_size": 65536 00:16:51.956 }, 00:16:51.956 { 00:16:51.956 "name": null, 00:16:51.956 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:51.956 "is_configured": false, 00:16:51.956 "data_offset": 0, 00:16:51.956 "data_size": 65536 00:16:51.956 }, 00:16:51.956 { 00:16:51.956 "name": "BaseBdev3", 00:16:51.956 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:51.956 "is_configured": true, 00:16:51.956 "data_offset": 0, 00:16:51.956 "data_size": 65536 00:16:51.956 } 00:16:51.956 ] 00:16:51.956 }' 00:16:51.956 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.956 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.891 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.891 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:52.891 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:52.891 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:53.150 [2024-07-15 14:09:39.152459] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.408 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:53.408 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.409 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.669 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.669 "name": "Existed_Raid", 00:16:53.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.669 "strip_size_kb": 64, 00:16:53.669 "state": "configuring", 00:16:53.669 "raid_level": "raid0", 00:16:53.669 "superblock": false, 00:16:53.669 "num_base_bdevs": 3, 00:16:53.669 "num_base_bdevs_discovered": 1, 00:16:53.669 "num_base_bdevs_operational": 3, 00:16:53.669 "base_bdevs_list": [ 00:16:53.669 { 00:16:53.669 "name": null, 00:16:53.669 "uuid": 
"cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:53.669 "is_configured": false, 00:16:53.669 "data_offset": 0, 00:16:53.669 "data_size": 65536 00:16:53.669 }, 00:16:53.669 { 00:16:53.669 "name": null, 00:16:53.669 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:53.669 "is_configured": false, 00:16:53.669 "data_offset": 0, 00:16:53.669 "data_size": 65536 00:16:53.669 }, 00:16:53.669 { 00:16:53.669 "name": "BaseBdev3", 00:16:53.669 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:53.669 "is_configured": true, 00:16:53.669 "data_offset": 0, 00:16:53.669 "data_size": 65536 00:16:53.669 } 00:16:53.669 ] 00:16:53.669 }' 00:16:53.669 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.669 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.644 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:54.644 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.644 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:54.644 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:54.919 [2024-07-15 14:09:40.824556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.920 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.179 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.179 "name": "Existed_Raid", 00:16:55.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.179 "strip_size_kb": 64, 00:16:55.179 "state": "configuring", 00:16:55.179 "raid_level": "raid0", 00:16:55.179 "superblock": false, 00:16:55.179 "num_base_bdevs": 3, 00:16:55.179 "num_base_bdevs_discovered": 2, 00:16:55.179 "num_base_bdevs_operational": 3, 00:16:55.179 
"base_bdevs_list": [ 00:16:55.179 { 00:16:55.179 "name": null, 00:16:55.179 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:55.179 "is_configured": false, 00:16:55.179 "data_offset": 0, 00:16:55.179 "data_size": 65536 00:16:55.179 }, 00:16:55.179 { 00:16:55.179 "name": "BaseBdev2", 00:16:55.179 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:55.179 "is_configured": true, 00:16:55.179 "data_offset": 0, 00:16:55.179 "data_size": 65536 00:16:55.179 }, 00:16:55.179 { 00:16:55.179 "name": "BaseBdev3", 00:16:55.179 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:55.179 "is_configured": true, 00:16:55.179 "data_offset": 0, 00:16:55.179 "data_size": 65536 00:16:55.179 } 00:16:55.179 ] 00:16:55.179 }' 00:16:55.179 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.179 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.115 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.115 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:56.115 14:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:56.115 14:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:56.115 14:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.374 14:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u cbe20b1c-f737-44bd-bc90-5431e6ac05dc 00:16:56.634 [2024-07-15 14:09:42.575570] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:56.634 [2024-07-15 14:09:42.575883] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:56.634 [2024-07-15 14:09:42.575937] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:56.634 [2024-07-15 14:09:42.576145] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:56.634 [2024-07-15 14:09:42.576489] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:56.634 [2024-07-15 14:09:42.576640] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:16:56.634 [2024-07-15 14:09:42.576959] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.634 NewBaseBdev 00:16:56.634 14:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:56.634 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:56.634 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:56.634 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:16:56.634 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:56.634 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:56.634 14:09:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.893 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:57.152 [ 00:16:57.152 { 00:16:57.152 "name": "NewBaseBdev", 00:16:57.152 "aliases": [ 00:16:57.152 "cbe20b1c-f737-44bd-bc90-5431e6ac05dc" 00:16:57.152 ], 00:16:57.152 "product_name": "Malloc disk", 00:16:57.152 "block_size": 512, 00:16:57.152 "num_blocks": 65536, 00:16:57.152 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:57.152 "assigned_rate_limits": { 00:16:57.152 "rw_ios_per_sec": 0, 00:16:57.152 "rw_mbytes_per_sec": 0, 00:16:57.152 "r_mbytes_per_sec": 0, 00:16:57.152 "w_mbytes_per_sec": 0 00:16:57.152 }, 00:16:57.152 "claimed": true, 00:16:57.152 "claim_type": "exclusive_write", 00:16:57.152 "zoned": false, 00:16:57.152 "supported_io_types": { 00:16:57.152 "read": true, 00:16:57.152 "write": true, 00:16:57.152 "unmap": true, 00:16:57.152 "flush": true, 00:16:57.152 "reset": true, 00:16:57.152 "nvme_admin": false, 00:16:57.152 "nvme_io": false, 00:16:57.152 "nvme_io_md": false, 00:16:57.152 "write_zeroes": true, 00:16:57.152 "zcopy": true, 00:16:57.152 "get_zone_info": false, 00:16:57.152 "zone_management": false, 00:16:57.152 "zone_append": false, 00:16:57.152 "compare": false, 00:16:57.152 "compare_and_write": false, 00:16:57.152 "abort": true, 00:16:57.152 "seek_hole": false, 00:16:57.152 "seek_data": false, 00:16:57.152 "copy": true, 00:16:57.152 "nvme_iov_md": false 00:16:57.152 }, 00:16:57.152 "memory_domains": [ 00:16:57.152 { 00:16:57.152 "dma_device_id": "system", 00:16:57.152 "dma_device_type": 1 00:16:57.152 }, 00:16:57.152 { 00:16:57.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.152 "dma_device_type": 2 00:16:57.152 } 00:16:57.152 ], 00:16:57.152 "driver_specific": {} 00:16:57.152 } 00:16:57.152 ] 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.152 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:57.409 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.409 "name": "Existed_Raid", 00:16:57.409 "uuid": "66105776-76cf-46de-b1e0-b6ccf90764e7", 00:16:57.409 "strip_size_kb": 64, 00:16:57.409 "state": "online", 00:16:57.409 "raid_level": "raid0", 00:16:57.409 "superblock": false, 00:16:57.409 "num_base_bdevs": 3, 00:16:57.409 "num_base_bdevs_discovered": 3, 00:16:57.409 "num_base_bdevs_operational": 3, 00:16:57.409 "base_bdevs_list": [ 00:16:57.409 { 00:16:57.409 "name": "NewBaseBdev", 00:16:57.409 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:57.409 "is_configured": true, 00:16:57.409 "data_offset": 0, 00:16:57.409 "data_size": 65536 00:16:57.409 }, 00:16:57.409 { 00:16:57.409 "name": "BaseBdev2", 00:16:57.409 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:57.409 "is_configured": true, 00:16:57.409 "data_offset": 0, 00:16:57.409 "data_size": 65536 00:16:57.409 }, 00:16:57.409 { 00:16:57.409 "name": "BaseBdev3", 00:16:57.409 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:57.409 "is_configured": true, 00:16:57.409 "data_offset": 0, 00:16:57.409 "data_size": 65536 00:16:57.409 } 00:16:57.409 ] 00:16:57.409 }' 00:16:57.409 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.409 14:09:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:58.344 14:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:58.344 [2024-07-15 14:09:44.244332] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.344 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:58.344 "name": "Existed_Raid", 00:16:58.344 "aliases": [ 00:16:58.344 "66105776-76cf-46de-b1e0-b6ccf90764e7" 00:16:58.344 ], 00:16:58.344 "product_name": "Raid Volume", 00:16:58.344 "block_size": 512, 00:16:58.344 "num_blocks": 196608, 00:16:58.344 "uuid": "66105776-76cf-46de-b1e0-b6ccf90764e7", 00:16:58.344 "assigned_rate_limits": { 00:16:58.344 "rw_ios_per_sec": 0, 00:16:58.344 "rw_mbytes_per_sec": 0, 00:16:58.344 "r_mbytes_per_sec": 0, 00:16:58.344 "w_mbytes_per_sec": 0 00:16:58.344 }, 00:16:58.344 "claimed": false, 00:16:58.344 "zoned": false, 00:16:58.344 "supported_io_types": { 00:16:58.344 "read": true, 00:16:58.344 "write": true, 00:16:58.344 "unmap": true, 00:16:58.344 "flush": true, 00:16:58.344 "reset": true, 00:16:58.344 "nvme_admin": false, 00:16:58.344 "nvme_io": false, 00:16:58.344 "nvme_io_md": false, 00:16:58.344 "write_zeroes": true, 00:16:58.344 "zcopy": false, 00:16:58.344 "get_zone_info": false, 
00:16:58.344 "zone_management": false, 00:16:58.344 "zone_append": false, 00:16:58.344 "compare": false, 00:16:58.344 "compare_and_write": false, 00:16:58.344 "abort": false, 00:16:58.344 "seek_hole": false, 00:16:58.344 "seek_data": false, 00:16:58.344 "copy": false, 00:16:58.344 "nvme_iov_md": false 00:16:58.344 }, 00:16:58.344 "memory_domains": [ 00:16:58.344 { 00:16:58.344 "dma_device_id": "system", 00:16:58.344 "dma_device_type": 1 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.344 "dma_device_type": 2 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "dma_device_id": "system", 00:16:58.344 "dma_device_type": 1 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.344 "dma_device_type": 2 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "dma_device_id": "system", 00:16:58.344 "dma_device_type": 1 00:16:58.344 }, 00:16:58.344 { 00:16:58.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.344 "dma_device_type": 2 00:16:58.345 } 00:16:58.345 ], 00:16:58.345 "driver_specific": { 00:16:58.345 "raid": { 00:16:58.345 "uuid": "66105776-76cf-46de-b1e0-b6ccf90764e7", 00:16:58.345 "strip_size_kb": 64, 00:16:58.345 "state": "online", 00:16:58.345 "raid_level": "raid0", 00:16:58.345 "superblock": false, 00:16:58.345 "num_base_bdevs": 3, 00:16:58.345 "num_base_bdevs_discovered": 3, 00:16:58.345 "num_base_bdevs_operational": 3, 00:16:58.345 "base_bdevs_list": [ 00:16:58.345 { 00:16:58.345 "name": "NewBaseBdev", 00:16:58.345 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:58.345 "is_configured": true, 00:16:58.345 "data_offset": 0, 00:16:58.345 "data_size": 65536 00:16:58.345 }, 00:16:58.345 { 00:16:58.345 "name": "BaseBdev2", 00:16:58.345 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:58.345 "is_configured": true, 00:16:58.345 "data_offset": 0, 00:16:58.345 "data_size": 65536 00:16:58.345 }, 00:16:58.345 { 00:16:58.345 "name": "BaseBdev3", 00:16:58.345 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:16:58.345 "is_configured": true, 00:16:58.345 "data_offset": 0, 00:16:58.345 "data_size": 65536 00:16:58.345 } 00:16:58.345 ] 00:16:58.345 } 00:16:58.345 } 00:16:58.345 }' 00:16:58.345 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.345 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:58.345 BaseBdev2 00:16:58.345 BaseBdev3' 00:16:58.345 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:58.345 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:58.345 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:58.911 "name": "NewBaseBdev", 00:16:58.911 "aliases": [ 00:16:58.911 "cbe20b1c-f737-44bd-bc90-5431e6ac05dc" 00:16:58.911 ], 00:16:58.911 "product_name": "Malloc disk", 00:16:58.911 "block_size": 512, 00:16:58.911 "num_blocks": 65536, 00:16:58.911 "uuid": "cbe20b1c-f737-44bd-bc90-5431e6ac05dc", 00:16:58.911 "assigned_rate_limits": { 00:16:58.911 "rw_ios_per_sec": 0, 00:16:58.911 "rw_mbytes_per_sec": 0, 00:16:58.911 "r_mbytes_per_sec": 0, 00:16:58.911 "w_mbytes_per_sec": 0 00:16:58.911 }, 00:16:58.911 "claimed": 
true, 00:16:58.911 "claim_type": "exclusive_write", 00:16:58.911 "zoned": false, 00:16:58.911 "supported_io_types": { 00:16:58.911 "read": true, 00:16:58.911 "write": true, 00:16:58.911 "unmap": true, 00:16:58.911 "flush": true, 00:16:58.911 "reset": true, 00:16:58.911 "nvme_admin": false, 00:16:58.911 "nvme_io": false, 00:16:58.911 "nvme_io_md": false, 00:16:58.911 "write_zeroes": true, 00:16:58.911 "zcopy": true, 00:16:58.911 "get_zone_info": false, 00:16:58.911 "zone_management": false, 00:16:58.911 "zone_append": false, 00:16:58.911 "compare": false, 00:16:58.911 "compare_and_write": false, 00:16:58.911 "abort": true, 00:16:58.911 "seek_hole": false, 00:16:58.911 "seek_data": false, 00:16:58.911 "copy": true, 00:16:58.911 "nvme_iov_md": false 00:16:58.911 }, 00:16:58.911 "memory_domains": [ 00:16:58.911 { 00:16:58.911 "dma_device_id": "system", 00:16:58.911 "dma_device_type": 1 00:16:58.911 }, 00:16:58.911 { 00:16:58.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.911 "dma_device_type": 2 00:16:58.911 } 00:16:58.911 ], 00:16:58.911 "driver_specific": {} 00:16:58.911 }' 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:58.911 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.169 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.169 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.169 14:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.169 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.169 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.169 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:59.169 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.426 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.426 "name": "BaseBdev2", 00:16:59.426 "aliases": [ 00:16:59.426 "1ef324ad-6957-4568-a3b5-9fb9cffb090b" 00:16:59.426 ], 00:16:59.426 "product_name": "Malloc disk", 00:16:59.426 "block_size": 512, 00:16:59.427 "num_blocks": 65536, 00:16:59.427 "uuid": "1ef324ad-6957-4568-a3b5-9fb9cffb090b", 00:16:59.427 "assigned_rate_limits": { 00:16:59.427 "rw_ios_per_sec": 0, 00:16:59.427 "rw_mbytes_per_sec": 0, 00:16:59.427 "r_mbytes_per_sec": 0, 00:16:59.427 "w_mbytes_per_sec": 0 00:16:59.427 }, 00:16:59.427 "claimed": true, 00:16:59.427 "claim_type": "exclusive_write", 00:16:59.427 "zoned": false, 00:16:59.427 "supported_io_types": { 00:16:59.427 "read": true, 00:16:59.427 "write": true, 00:16:59.427 "unmap": true, 
00:16:59.427 "flush": true, 00:16:59.427 "reset": true, 00:16:59.427 "nvme_admin": false, 00:16:59.427 "nvme_io": false, 00:16:59.427 "nvme_io_md": false, 00:16:59.427 "write_zeroes": true, 00:16:59.427 "zcopy": true, 00:16:59.427 "get_zone_info": false, 00:16:59.427 "zone_management": false, 00:16:59.427 "zone_append": false, 00:16:59.427 "compare": false, 00:16:59.427 "compare_and_write": false, 00:16:59.427 "abort": true, 00:16:59.427 "seek_hole": false, 00:16:59.427 "seek_data": false, 00:16:59.427 "copy": true, 00:16:59.427 "nvme_iov_md": false 00:16:59.427 }, 00:16:59.427 "memory_domains": [ 00:16:59.427 { 00:16:59.427 "dma_device_id": "system", 00:16:59.427 "dma_device_type": 1 00:16:59.427 }, 00:16:59.427 { 00:16:59.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.427 "dma_device_type": 2 00:16:59.427 } 00:16:59.427 ], 00:16:59.427 "driver_specific": {} 00:16:59.427 }' 00:16:59.427 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.427 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.427 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:59.427 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:59.685 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.252 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.252 "name": "BaseBdev3", 00:17:00.252 "aliases": [ 00:17:00.252 "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5" 00:17:00.252 ], 00:17:00.252 "product_name": "Malloc disk", 00:17:00.252 "block_size": 512, 00:17:00.252 "num_blocks": 65536, 00:17:00.252 "uuid": "6b3e87ad-bc37-46b3-bf7b-fadd096abbb5", 00:17:00.252 "assigned_rate_limits": { 00:17:00.252 "rw_ios_per_sec": 0, 00:17:00.252 "rw_mbytes_per_sec": 0, 00:17:00.252 "r_mbytes_per_sec": 0, 00:17:00.252 "w_mbytes_per_sec": 0 00:17:00.252 }, 00:17:00.252 "claimed": true, 00:17:00.252 "claim_type": "exclusive_write", 00:17:00.252 "zoned": false, 00:17:00.252 "supported_io_types": { 00:17:00.252 "read": true, 00:17:00.252 "write": true, 00:17:00.252 "unmap": true, 00:17:00.252 "flush": true, 00:17:00.252 "reset": true, 00:17:00.252 "nvme_admin": false, 00:17:00.252 "nvme_io": false, 00:17:00.252 "nvme_io_md": false, 00:17:00.252 "write_zeroes": true, 
00:17:00.252 "zcopy": true, 00:17:00.252 "get_zone_info": false, 00:17:00.252 "zone_management": false, 00:17:00.252 "zone_append": false, 00:17:00.252 "compare": false, 00:17:00.252 "compare_and_write": false, 00:17:00.252 "abort": true, 00:17:00.252 "seek_hole": false, 00:17:00.252 "seek_data": false, 00:17:00.252 "copy": true, 00:17:00.252 "nvme_iov_md": false 00:17:00.252 }, 00:17:00.252 "memory_domains": [ 00:17:00.252 { 00:17:00.252 "dma_device_id": "system", 00:17:00.252 "dma_device_type": 1 00:17:00.252 }, 00:17:00.252 { 00:17:00.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.252 "dma_device_type": 2 00:17:00.252 } 00:17:00.252 ], 00:17:00.252 "driver_specific": {} 00:17:00.252 }' 00:17:00.252 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.252 14:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:00.252 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.511 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.511 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.511 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:00.769 [2024-07-15 14:09:46.576603] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.769 [2024-07-15 14:09:46.576871] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.769 [2024-07-15 14:09:46.577073] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.769 [2024-07-15 14:09:46.577245] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.769 [2024-07-15 14:09:46.577367] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 191211 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 191211 ']' 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 191211 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 191211 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 191211' 00:17:00.769 killing process with pid 191211 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 191211 00:17:00.769 [2024-07-15 14:09:46.621701] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.769 14:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 191211 00:17:01.027 [2024-07-15 14:09:46.853994] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.400 14:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:17:02.400 00:17:02.400 real 0m32.804s 00:17:02.400 user 1m0.428s 00:17:02.400 sys 0m3.760s 00:17:02.400 14:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.400 14:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.400 ************************************ 00:17:02.400 END TEST raid_state_function_test 00:17:02.400 ************************************ 00:17:02.400 14:09:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:02.400 14:09:48 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:17:02.400 14:09:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:02.400 14:09:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.400 14:09:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.400 ************************************ 00:17:02.400 START TEST raid_state_function_test_sb 00:17:02.400 ************************************ 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=192220 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 192220' 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:02.400 Process raid pid: 192220 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 192220 /var/tmp/spdk-raid.sock 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 192220 ']' 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.400 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:02.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:02.401 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.401 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.401 [2024-07-15 14:09:48.090702] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:17:02.401 [2024-07-15 14:09:48.091580] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.401 [2024-07-15 14:09:48.249035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.659 [2024-07-15 14:09:48.469300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.950 [2024-07-15 14:09:48.697339] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.209 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.209 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:17:03.209 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:03.467 [2024-07-15 14:09:49.321275] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.467 [2024-07-15 14:09:49.323000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.467 [2024-07-15 14:09:49.323177] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.467 [2024-07-15 14:09:49.323331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.467 [2024-07-15 14:09:49.323454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:03.467 [2024-07-15 14:09:49.323649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.467 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.725 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.725 "name": "Existed_Raid", 00:17:03.725 "uuid": 
"23548b16-9891-4ab8-b0d0-5e06952ea5e3", 00:17:03.725 "strip_size_kb": 64, 00:17:03.725 "state": "configuring", 00:17:03.725 "raid_level": "raid0", 00:17:03.725 "superblock": true, 00:17:03.725 "num_base_bdevs": 3, 00:17:03.725 "num_base_bdevs_discovered": 0, 00:17:03.725 "num_base_bdevs_operational": 3, 00:17:03.725 "base_bdevs_list": [ 00:17:03.725 { 00:17:03.725 "name": "BaseBdev1", 00:17:03.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.725 "is_configured": false, 00:17:03.725 "data_offset": 0, 00:17:03.725 "data_size": 0 00:17:03.725 }, 00:17:03.725 { 00:17:03.725 "name": "BaseBdev2", 00:17:03.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.725 "is_configured": false, 00:17:03.725 "data_offset": 0, 00:17:03.725 "data_size": 0 00:17:03.725 }, 00:17:03.725 { 00:17:03.725 "name": "BaseBdev3", 00:17:03.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.725 "is_configured": false, 00:17:03.725 "data_offset": 0, 00:17:03.725 "data_size": 0 00:17:03.725 } 00:17:03.725 ] 00:17:03.725 }' 00:17:03.725 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.725 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.658 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:04.658 [2024-07-15 14:09:50.533358] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.658 [2024-07-15 14:09:50.533591] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:04.658 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:04.916 [2024-07-15 14:09:50.765419] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.916 [2024-07-15 14:09:50.766067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.916 [2024-07-15 14:09:50.766276] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.916 [2024-07-15 14:09:50.766407] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.916 [2024-07-15 14:09:50.766637] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:04.916 [2024-07-15 14:09:50.766828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:04.916 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:05.175 [2024-07-15 14:09:51.037370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.175 BaseBdev1 00:17:05.175 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:05.175 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:05.175 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:05.175 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:17:05.175 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:05.175 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:05.175 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.432 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.691 [ 00:17:05.691 { 00:17:05.691 "name": "BaseBdev1", 00:17:05.691 "aliases": [ 00:17:05.691 "151247bc-6904-443b-8eec-5b50891fedfd" 00:17:05.691 ], 00:17:05.691 "product_name": "Malloc disk", 00:17:05.691 "block_size": 512, 00:17:05.691 "num_blocks": 65536, 00:17:05.691 "uuid": "151247bc-6904-443b-8eec-5b50891fedfd", 00:17:05.691 "assigned_rate_limits": { 00:17:05.691 "rw_ios_per_sec": 0, 00:17:05.691 "rw_mbytes_per_sec": 0, 00:17:05.691 "r_mbytes_per_sec": 0, 00:17:05.691 "w_mbytes_per_sec": 0 00:17:05.691 }, 00:17:05.691 "claimed": true, 00:17:05.691 "claim_type": "exclusive_write", 00:17:05.691 "zoned": false, 00:17:05.691 "supported_io_types": { 00:17:05.691 "read": true, 00:17:05.691 "write": true, 00:17:05.691 "unmap": true, 00:17:05.691 "flush": true, 00:17:05.691 "reset": true, 00:17:05.691 "nvme_admin": false, 00:17:05.691 "nvme_io": false, 00:17:05.691 "nvme_io_md": false, 00:17:05.691 "write_zeroes": true, 00:17:05.691 "zcopy": true, 00:17:05.691 "get_zone_info": false, 00:17:05.691 "zone_management": false, 00:17:05.691 "zone_append": false, 00:17:05.691 "compare": false, 00:17:05.691 "compare_and_write": false, 00:17:05.691 "abort": true, 00:17:05.691 "seek_hole": false, 00:17:05.691 "seek_data": false, 00:17:05.691 "copy": true, 00:17:05.691 "nvme_iov_md": false 00:17:05.691 }, 00:17:05.691 "memory_domains": [ 00:17:05.691 { 00:17:05.691 "dma_device_id": "system", 00:17:05.691 "dma_device_type": 1 00:17:05.691 }, 00:17:05.691 { 00:17:05.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.691 "dma_device_type": 2 00:17:05.691 } 00:17:05.691 ], 00:17:05.691 "driver_specific": {} 00:17:05.691 } 00:17:05.691 ] 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.691 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.950 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:05.950 "name": "Existed_Raid", 00:17:05.950 "uuid": "9b5e9e85-8b75-4d05-93a7-4664f5774e4a", 00:17:05.950 "strip_size_kb": 64, 00:17:05.950 "state": "configuring", 00:17:05.950 "raid_level": "raid0", 00:17:05.950 "superblock": true, 00:17:05.950 "num_base_bdevs": 3, 00:17:05.950 "num_base_bdevs_discovered": 1, 00:17:05.950 "num_base_bdevs_operational": 3, 00:17:05.950 "base_bdevs_list": [ 00:17:05.950 { 00:17:05.950 "name": "BaseBdev1", 00:17:05.950 "uuid": "151247bc-6904-443b-8eec-5b50891fedfd", 00:17:05.950 "is_configured": true, 00:17:05.950 "data_offset": 2048, 00:17:05.950 "data_size": 63488 00:17:05.950 }, 00:17:05.950 { 00:17:05.950 "name": "BaseBdev2", 00:17:05.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.950 "is_configured": false, 00:17:05.950 "data_offset": 0, 00:17:05.950 "data_size": 0 00:17:05.950 }, 00:17:05.950 { 00:17:05.950 "name": "BaseBdev3", 00:17:05.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.950 "is_configured": false, 00:17:05.950 "data_offset": 0, 00:17:05.950 "data_size": 0 00:17:05.950 } 00:17:05.950 ] 00:17:05.950 }' 00:17:05.950 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:05.950 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.883 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:06.883 [2024-07-15 14:09:52.825675] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.883 [2024-07-15 14:09:52.825943] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:06.884 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:07.142 [2024-07-15 14:09:53.129791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:07.142 [2024-07-15 14:09:53.131442] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.142 [2024-07-15 14:09:53.131993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.142 [2024-07-15 14:09:53.132148] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:07.142 [2024-07-15 14:09:53.132298] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.400 14:09:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.400 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.658 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.658 "name": "Existed_Raid", 00:17:07.658 "uuid": "3dfebc5b-aaee-4cf6-bb7f-186d7f23cec5", 00:17:07.658 "strip_size_kb": 64, 00:17:07.658 "state": "configuring", 00:17:07.658 "raid_level": "raid0", 00:17:07.658 "superblock": true, 00:17:07.658 "num_base_bdevs": 3, 00:17:07.658 "num_base_bdevs_discovered": 1, 00:17:07.658 "num_base_bdevs_operational": 3, 00:17:07.658 "base_bdevs_list": [ 00:17:07.658 { 00:17:07.658 "name": "BaseBdev1", 00:17:07.658 "uuid": "151247bc-6904-443b-8eec-5b50891fedfd", 00:17:07.658 "is_configured": true, 00:17:07.658 "data_offset": 2048, 00:17:07.658 "data_size": 63488 00:17:07.658 }, 00:17:07.658 { 00:17:07.658 "name": "BaseBdev2", 00:17:07.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.658 "is_configured": false, 00:17:07.658 "data_offset": 0, 00:17:07.658 "data_size": 0 00:17:07.658 }, 00:17:07.658 { 00:17:07.658 "name": "BaseBdev3", 00:17:07.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.658 "is_configured": false, 00:17:07.658 "data_offset": 0, 00:17:07.658 "data_size": 0 00:17:07.658 } 00:17:07.658 ] 00:17:07.658 }' 00:17:07.658 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.658 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.223 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:08.482 [2024-07-15 14:09:54.433741] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.482 BaseBdev2 00:17:08.482 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:08.482 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:08.482 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:08.482 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local i 00:17:08.482 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:08.482 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:08.482 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.048 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:09.048 [ 00:17:09.048 { 00:17:09.048 "name": "BaseBdev2", 00:17:09.048 "aliases": [ 00:17:09.048 "8d47a15c-7629-4ca6-a38e-1002dc5f744d" 00:17:09.048 ], 00:17:09.048 "product_name": "Malloc disk", 00:17:09.048 "block_size": 512, 00:17:09.048 "num_blocks": 65536, 00:17:09.049 "uuid": "8d47a15c-7629-4ca6-a38e-1002dc5f744d", 00:17:09.049 "assigned_rate_limits": { 00:17:09.049 "rw_ios_per_sec": 0, 00:17:09.049 "rw_mbytes_per_sec": 0, 00:17:09.049 "r_mbytes_per_sec": 0, 00:17:09.049 "w_mbytes_per_sec": 0 00:17:09.049 }, 00:17:09.049 "claimed": true, 00:17:09.049 "claim_type": "exclusive_write", 00:17:09.049 "zoned": false, 00:17:09.049 "supported_io_types": { 00:17:09.049 "read": true, 00:17:09.049 "write": true, 00:17:09.049 "unmap": true, 00:17:09.049 "flush": true, 00:17:09.049 "reset": true, 00:17:09.049 "nvme_admin": false, 00:17:09.049 "nvme_io": false, 00:17:09.049 "nvme_io_md": false, 00:17:09.049 "write_zeroes": true, 00:17:09.049 "zcopy": true, 00:17:09.049 "get_zone_info": false, 00:17:09.049 "zone_management": false, 00:17:09.049 "zone_append": false, 00:17:09.049 "compare": false, 00:17:09.049 "compare_and_write": false, 00:17:09.049 "abort": true, 00:17:09.049 "seek_hole": false, 00:17:09.049 "seek_data": false, 00:17:09.049 "copy": true, 00:17:09.049 "nvme_iov_md": false 00:17:09.049 }, 00:17:09.049 "memory_domains": [ 00:17:09.049 { 00:17:09.049 "dma_device_id": "system", 00:17:09.049 "dma_device_type": 1 00:17:09.049 }, 00:17:09.049 { 00:17:09.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.049 "dma_device_type": 2 00:17:09.049 } 00:17:09.049 ], 00:17:09.049 "driver_specific": {} 00:17:09.049 } 00:17:09.049 ] 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 
-- # local num_base_bdevs 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.049 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.306 14:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.306 "name": "Existed_Raid", 00:17:09.306 "uuid": "3dfebc5b-aaee-4cf6-bb7f-186d7f23cec5", 00:17:09.306 "strip_size_kb": 64, 00:17:09.306 "state": "configuring", 00:17:09.306 "raid_level": "raid0", 00:17:09.306 "superblock": true, 00:17:09.306 "num_base_bdevs": 3, 00:17:09.306 "num_base_bdevs_discovered": 2, 00:17:09.306 "num_base_bdevs_operational": 3, 00:17:09.306 "base_bdevs_list": [ 00:17:09.306 { 00:17:09.306 "name": "BaseBdev1", 00:17:09.306 "uuid": "151247bc-6904-443b-8eec-5b50891fedfd", 00:17:09.306 "is_configured": true, 00:17:09.306 "data_offset": 2048, 00:17:09.306 "data_size": 63488 00:17:09.306 }, 00:17:09.306 { 00:17:09.306 "name": "BaseBdev2", 00:17:09.306 "uuid": "8d47a15c-7629-4ca6-a38e-1002dc5f744d", 00:17:09.306 "is_configured": true, 00:17:09.306 "data_offset": 2048, 00:17:09.306 "data_size": 63488 00:17:09.306 }, 00:17:09.306 { 00:17:09.306 "name": "BaseBdev3", 00:17:09.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.306 "is_configured": false, 00:17:09.306 "data_offset": 0, 00:17:09.306 "data_size": 0 00:17:09.306 } 00:17:09.306 ] 00:17:09.306 }' 00:17:09.306 14:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.306 14:09:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.872 14:09:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:10.131 [2024-07-15 14:09:56.111036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.131 [2024-07-15 14:09:56.111269] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:10.131 [2024-07-15 14:09:56.111286] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:10.131 [2024-07-15 14:09:56.111417] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:10.131 [2024-07-15 14:09:56.111660] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:10.131 [2024-07-15 14:09:56.111686] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:10.131 [2024-07-15 14:09:56.111833] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.131 BaseBdev3 00:17:10.131 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:10.131 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:10.131 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:10.131 14:09:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:17:10.131 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:10.131 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:10.131 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.389 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:10.647 [ 00:17:10.647 { 00:17:10.648 "name": "BaseBdev3", 00:17:10.648 "aliases": [ 00:17:10.648 "23d21441-b5ed-4a25-8480-c3a2d77f887b" 00:17:10.648 ], 00:17:10.648 "product_name": "Malloc disk", 00:17:10.648 "block_size": 512, 00:17:10.648 "num_blocks": 65536, 00:17:10.648 "uuid": "23d21441-b5ed-4a25-8480-c3a2d77f887b", 00:17:10.648 "assigned_rate_limits": { 00:17:10.648 "rw_ios_per_sec": 0, 00:17:10.648 "rw_mbytes_per_sec": 0, 00:17:10.648 "r_mbytes_per_sec": 0, 00:17:10.648 "w_mbytes_per_sec": 0 00:17:10.648 }, 00:17:10.648 "claimed": true, 00:17:10.648 "claim_type": "exclusive_write", 00:17:10.648 "zoned": false, 00:17:10.648 "supported_io_types": { 00:17:10.648 "read": true, 00:17:10.648 "write": true, 00:17:10.648 "unmap": true, 00:17:10.648 "flush": true, 00:17:10.648 "reset": true, 00:17:10.648 "nvme_admin": false, 00:17:10.648 "nvme_io": false, 00:17:10.648 "nvme_io_md": false, 00:17:10.648 "write_zeroes": true, 00:17:10.648 "zcopy": true, 00:17:10.648 "get_zone_info": false, 00:17:10.648 "zone_management": false, 00:17:10.648 "zone_append": false, 00:17:10.648 "compare": false, 00:17:10.648 "compare_and_write": false, 00:17:10.648 "abort": true, 00:17:10.648 "seek_hole": false, 00:17:10.648 "seek_data": false, 00:17:10.648 "copy": true, 00:17:10.648 "nvme_iov_md": false 00:17:10.648 }, 00:17:10.648 "memory_domains": [ 00:17:10.648 { 00:17:10.648 "dma_device_id": "system", 00:17:10.648 "dma_device_type": 1 00:17:10.648 }, 00:17:10.648 { 00:17:10.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.648 "dma_device_type": 2 00:17:10.648 } 00:17:10.648 ], 00:17:10.648 "driver_specific": {} 00:17:10.648 } 00:17:10.648 ] 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.648 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.905 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.905 "name": "Existed_Raid", 00:17:10.905 "uuid": "3dfebc5b-aaee-4cf6-bb7f-186d7f23cec5", 00:17:10.905 "strip_size_kb": 64, 00:17:10.905 "state": "online", 00:17:10.905 "raid_level": "raid0", 00:17:10.905 "superblock": true, 00:17:10.905 "num_base_bdevs": 3, 00:17:10.905 "num_base_bdevs_discovered": 3, 00:17:10.905 "num_base_bdevs_operational": 3, 00:17:10.905 "base_bdevs_list": [ 00:17:10.905 { 00:17:10.905 "name": "BaseBdev1", 00:17:10.905 "uuid": "151247bc-6904-443b-8eec-5b50891fedfd", 00:17:10.905 "is_configured": true, 00:17:10.905 "data_offset": 2048, 00:17:10.905 "data_size": 63488 00:17:10.905 }, 00:17:10.905 { 00:17:10.905 "name": "BaseBdev2", 00:17:10.905 "uuid": "8d47a15c-7629-4ca6-a38e-1002dc5f744d", 00:17:10.905 "is_configured": true, 00:17:10.905 "data_offset": 2048, 00:17:10.905 "data_size": 63488 00:17:10.905 }, 00:17:10.905 { 00:17:10.905 "name": "BaseBdev3", 00:17:10.905 "uuid": "23d21441-b5ed-4a25-8480-c3a2d77f887b", 00:17:10.905 "is_configured": true, 00:17:10.905 "data_offset": 2048, 00:17:10.905 "data_size": 63488 00:17:10.905 } 00:17:10.905 ] 00:17:10.905 }' 00:17:10.905 14:09:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.905 14:09:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:11.855 [2024-07-15 14:09:57.799639] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.855 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:11.855 "name": "Existed_Raid", 00:17:11.856 "aliases": [ 00:17:11.856 "3dfebc5b-aaee-4cf6-bb7f-186d7f23cec5" 00:17:11.856 ], 00:17:11.856 "product_name": "Raid Volume", 00:17:11.856 "block_size": 512, 00:17:11.856 "num_blocks": 190464, 00:17:11.856 "uuid": "3dfebc5b-aaee-4cf6-bb7f-186d7f23cec5", 00:17:11.856 
"assigned_rate_limits": { 00:17:11.856 "rw_ios_per_sec": 0, 00:17:11.856 "rw_mbytes_per_sec": 0, 00:17:11.856 "r_mbytes_per_sec": 0, 00:17:11.856 "w_mbytes_per_sec": 0 00:17:11.856 }, 00:17:11.856 "claimed": false, 00:17:11.856 "zoned": false, 00:17:11.856 "supported_io_types": { 00:17:11.856 "read": true, 00:17:11.856 "write": true, 00:17:11.856 "unmap": true, 00:17:11.856 "flush": true, 00:17:11.856 "reset": true, 00:17:11.856 "nvme_admin": false, 00:17:11.856 "nvme_io": false, 00:17:11.856 "nvme_io_md": false, 00:17:11.856 "write_zeroes": true, 00:17:11.856 "zcopy": false, 00:17:11.856 "get_zone_info": false, 00:17:11.856 "zone_management": false, 00:17:11.856 "zone_append": false, 00:17:11.856 "compare": false, 00:17:11.856 "compare_and_write": false, 00:17:11.856 "abort": false, 00:17:11.856 "seek_hole": false, 00:17:11.856 "seek_data": false, 00:17:11.856 "copy": false, 00:17:11.856 "nvme_iov_md": false 00:17:11.856 }, 00:17:11.856 "memory_domains": [ 00:17:11.856 { 00:17:11.856 "dma_device_id": "system", 00:17:11.856 "dma_device_type": 1 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.856 "dma_device_type": 2 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "dma_device_id": "system", 00:17:11.856 "dma_device_type": 1 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.856 "dma_device_type": 2 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "dma_device_id": "system", 00:17:11.856 "dma_device_type": 1 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.856 "dma_device_type": 2 00:17:11.856 } 00:17:11.856 ], 00:17:11.856 "driver_specific": { 00:17:11.856 "raid": { 00:17:11.856 "uuid": "3dfebc5b-aaee-4cf6-bb7f-186d7f23cec5", 00:17:11.856 "strip_size_kb": 64, 00:17:11.856 "state": "online", 00:17:11.856 "raid_level": "raid0", 00:17:11.856 "superblock": true, 00:17:11.856 "num_base_bdevs": 3, 00:17:11.856 "num_base_bdevs_discovered": 3, 00:17:11.856 "num_base_bdevs_operational": 3, 00:17:11.856 "base_bdevs_list": [ 00:17:11.856 { 00:17:11.856 "name": "BaseBdev1", 00:17:11.856 "uuid": "151247bc-6904-443b-8eec-5b50891fedfd", 00:17:11.856 "is_configured": true, 00:17:11.856 "data_offset": 2048, 00:17:11.856 "data_size": 63488 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "name": "BaseBdev2", 00:17:11.856 "uuid": "8d47a15c-7629-4ca6-a38e-1002dc5f744d", 00:17:11.856 "is_configured": true, 00:17:11.856 "data_offset": 2048, 00:17:11.856 "data_size": 63488 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "name": "BaseBdev3", 00:17:11.856 "uuid": "23d21441-b5ed-4a25-8480-c3a2d77f887b", 00:17:11.856 "is_configured": true, 00:17:11.856 "data_offset": 2048, 00:17:11.856 "data_size": 63488 00:17:11.856 } 00:17:11.856 ] 00:17:11.856 } 00:17:11.856 } 00:17:11.856 }' 00:17:11.856 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:12.113 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:12.113 BaseBdev2 00:17:12.113 BaseBdev3' 00:17:12.113 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.113 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:12.113 14:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- 
# jq '.[]' 00:17:12.369 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.369 "name": "BaseBdev1", 00:17:12.369 "aliases": [ 00:17:12.369 "151247bc-6904-443b-8eec-5b50891fedfd" 00:17:12.369 ], 00:17:12.370 "product_name": "Malloc disk", 00:17:12.370 "block_size": 512, 00:17:12.370 "num_blocks": 65536, 00:17:12.370 "uuid": "151247bc-6904-443b-8eec-5b50891fedfd", 00:17:12.370 "assigned_rate_limits": { 00:17:12.370 "rw_ios_per_sec": 0, 00:17:12.370 "rw_mbytes_per_sec": 0, 00:17:12.370 "r_mbytes_per_sec": 0, 00:17:12.370 "w_mbytes_per_sec": 0 00:17:12.370 }, 00:17:12.370 "claimed": true, 00:17:12.370 "claim_type": "exclusive_write", 00:17:12.370 "zoned": false, 00:17:12.370 "supported_io_types": { 00:17:12.370 "read": true, 00:17:12.370 "write": true, 00:17:12.370 "unmap": true, 00:17:12.370 "flush": true, 00:17:12.370 "reset": true, 00:17:12.370 "nvme_admin": false, 00:17:12.370 "nvme_io": false, 00:17:12.370 "nvme_io_md": false, 00:17:12.370 "write_zeroes": true, 00:17:12.370 "zcopy": true, 00:17:12.370 "get_zone_info": false, 00:17:12.370 "zone_management": false, 00:17:12.370 "zone_append": false, 00:17:12.370 "compare": false, 00:17:12.370 "compare_and_write": false, 00:17:12.370 "abort": true, 00:17:12.370 "seek_hole": false, 00:17:12.370 "seek_data": false, 00:17:12.370 "copy": true, 00:17:12.370 "nvme_iov_md": false 00:17:12.370 }, 00:17:12.370 "memory_domains": [ 00:17:12.370 { 00:17:12.370 "dma_device_id": "system", 00:17:12.370 "dma_device_type": 1 00:17:12.370 }, 00:17:12.370 { 00:17:12.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.370 "dma_device_type": 2 00:17:12.370 } 00:17:12.370 ], 00:17:12.370 "driver_specific": {} 00:17:12.370 }' 00:17:12.370 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.370 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.370 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:12.370 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.370 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:12.370 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:12.370 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:12.627 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:12.884 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:12.884 "name": "BaseBdev2", 
00:17:12.884 "aliases": [ 00:17:12.884 "8d47a15c-7629-4ca6-a38e-1002dc5f744d" 00:17:12.884 ], 00:17:12.884 "product_name": "Malloc disk", 00:17:12.884 "block_size": 512, 00:17:12.884 "num_blocks": 65536, 00:17:12.884 "uuid": "8d47a15c-7629-4ca6-a38e-1002dc5f744d", 00:17:12.884 "assigned_rate_limits": { 00:17:12.884 "rw_ios_per_sec": 0, 00:17:12.884 "rw_mbytes_per_sec": 0, 00:17:12.884 "r_mbytes_per_sec": 0, 00:17:12.884 "w_mbytes_per_sec": 0 00:17:12.884 }, 00:17:12.884 "claimed": true, 00:17:12.884 "claim_type": "exclusive_write", 00:17:12.884 "zoned": false, 00:17:12.884 "supported_io_types": { 00:17:12.884 "read": true, 00:17:12.884 "write": true, 00:17:12.884 "unmap": true, 00:17:12.884 "flush": true, 00:17:12.884 "reset": true, 00:17:12.884 "nvme_admin": false, 00:17:12.884 "nvme_io": false, 00:17:12.884 "nvme_io_md": false, 00:17:12.884 "write_zeroes": true, 00:17:12.884 "zcopy": true, 00:17:12.884 "get_zone_info": false, 00:17:12.884 "zone_management": false, 00:17:12.884 "zone_append": false, 00:17:12.884 "compare": false, 00:17:12.884 "compare_and_write": false, 00:17:12.884 "abort": true, 00:17:12.884 "seek_hole": false, 00:17:12.884 "seek_data": false, 00:17:12.884 "copy": true, 00:17:12.884 "nvme_iov_md": false 00:17:12.884 }, 00:17:12.884 "memory_domains": [ 00:17:12.884 { 00:17:12.884 "dma_device_id": "system", 00:17:12.884 "dma_device_type": 1 00:17:12.884 }, 00:17:12.884 { 00:17:12.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.884 "dma_device_type": 2 00:17:12.884 } 00:17:12.884 ], 00:17:12.884 "driver_specific": {} 00:17:12.884 }' 00:17:12.884 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:12.884 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.142 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.142 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.142 14:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.142 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.142 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.142 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.142 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.142 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.400 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.400 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:13.400 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:13.400 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:13.400 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:13.659 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:13.659 "name": "BaseBdev3", 00:17:13.659 "aliases": [ 00:17:13.659 "23d21441-b5ed-4a25-8480-c3a2d77f887b" 00:17:13.659 ], 00:17:13.659 "product_name": "Malloc disk", 00:17:13.659 
"block_size": 512, 00:17:13.659 "num_blocks": 65536, 00:17:13.659 "uuid": "23d21441-b5ed-4a25-8480-c3a2d77f887b", 00:17:13.659 "assigned_rate_limits": { 00:17:13.659 "rw_ios_per_sec": 0, 00:17:13.659 "rw_mbytes_per_sec": 0, 00:17:13.659 "r_mbytes_per_sec": 0, 00:17:13.659 "w_mbytes_per_sec": 0 00:17:13.659 }, 00:17:13.659 "claimed": true, 00:17:13.659 "claim_type": "exclusive_write", 00:17:13.659 "zoned": false, 00:17:13.659 "supported_io_types": { 00:17:13.659 "read": true, 00:17:13.659 "write": true, 00:17:13.659 "unmap": true, 00:17:13.659 "flush": true, 00:17:13.659 "reset": true, 00:17:13.659 "nvme_admin": false, 00:17:13.659 "nvme_io": false, 00:17:13.659 "nvme_io_md": false, 00:17:13.659 "write_zeroes": true, 00:17:13.659 "zcopy": true, 00:17:13.659 "get_zone_info": false, 00:17:13.659 "zone_management": false, 00:17:13.659 "zone_append": false, 00:17:13.659 "compare": false, 00:17:13.659 "compare_and_write": false, 00:17:13.659 "abort": true, 00:17:13.659 "seek_hole": false, 00:17:13.659 "seek_data": false, 00:17:13.659 "copy": true, 00:17:13.659 "nvme_iov_md": false 00:17:13.659 }, 00:17:13.659 "memory_domains": [ 00:17:13.659 { 00:17:13.659 "dma_device_id": "system", 00:17:13.659 "dma_device_type": 1 00:17:13.659 }, 00:17:13.659 { 00:17:13.659 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.659 "dma_device_type": 2 00:17:13.659 } 00:17:13.659 ], 00:17:13.659 "driver_specific": {} 00:17:13.659 }' 00:17:13.659 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.659 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:13.659 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:13.659 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.916 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:13.916 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:13.916 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.916 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:13.916 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:13.916 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:13.916 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:14.174 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:14.174 14:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:14.432 [2024-07-15 14:10:00.273603] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.432 [2024-07-15 14:10:00.273654] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:14.432 [2024-07-15 14:10:00.273714] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 
-- # case $1 in 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.432 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.690 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.690 "name": "Existed_Raid", 00:17:14.690 "uuid": "3dfebc5b-aaee-4cf6-bb7f-186d7f23cec5", 00:17:14.690 "strip_size_kb": 64, 00:17:14.690 "state": "offline", 00:17:14.690 "raid_level": "raid0", 00:17:14.690 "superblock": true, 00:17:14.690 "num_base_bdevs": 3, 00:17:14.690 "num_base_bdevs_discovered": 2, 00:17:14.690 "num_base_bdevs_operational": 2, 00:17:14.690 "base_bdevs_list": [ 00:17:14.690 { 00:17:14.690 "name": null, 00:17:14.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.690 "is_configured": false, 00:17:14.690 "data_offset": 2048, 00:17:14.690 "data_size": 63488 00:17:14.690 }, 00:17:14.690 { 00:17:14.690 "name": "BaseBdev2", 00:17:14.690 "uuid": "8d47a15c-7629-4ca6-a38e-1002dc5f744d", 00:17:14.690 "is_configured": true, 00:17:14.690 "data_offset": 2048, 00:17:14.690 "data_size": 63488 00:17:14.690 }, 00:17:14.690 { 00:17:14.690 "name": "BaseBdev3", 00:17:14.690 "uuid": "23d21441-b5ed-4a25-8480-c3a2d77f887b", 00:17:14.690 "is_configured": true, 00:17:14.690 "data_offset": 2048, 00:17:14.690 "data_size": 63488 00:17:14.690 } 00:17:14.690 ] 00:17:14.690 }' 00:17:14.690 14:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.690 14:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 14:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:15.624 14:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:15.624 14:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:17:15.624 14:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:15.624 14:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:15.624 14:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:15.624 14:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:16.190 [2024-07-15 14:10:01.897351] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.190 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:16.190 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:16.190 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:16.190 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.448 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:16.449 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.449 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:16.707 [2024-07-15 14:10:02.553219] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:16.707 [2024-07-15 14:10:02.553310] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:16.707 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:16.707 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:16.707 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.707 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:16.965 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:16.965 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:16.965 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:16.965 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:16.965 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:16.965 14:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:17.532 BaseBdev2 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local i 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:17.532 14:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:17.790 [ 00:17:17.790 { 00:17:17.790 "name": "BaseBdev2", 00:17:17.790 "aliases": [ 00:17:17.790 "dfa12536-d60f-4a26-90ef-204975331101" 00:17:17.790 ], 00:17:17.790 "product_name": "Malloc disk", 00:17:17.790 "block_size": 512, 00:17:17.790 "num_blocks": 65536, 00:17:17.790 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:17.790 "assigned_rate_limits": { 00:17:17.790 "rw_ios_per_sec": 0, 00:17:17.790 "rw_mbytes_per_sec": 0, 00:17:17.790 "r_mbytes_per_sec": 0, 00:17:17.790 "w_mbytes_per_sec": 0 00:17:17.790 }, 00:17:17.790 "claimed": false, 00:17:17.790 "zoned": false, 00:17:17.790 "supported_io_types": { 00:17:17.790 "read": true, 00:17:17.790 "write": true, 00:17:17.790 "unmap": true, 00:17:17.790 "flush": true, 00:17:17.790 "reset": true, 00:17:17.790 "nvme_admin": false, 00:17:17.790 "nvme_io": false, 00:17:17.790 "nvme_io_md": false, 00:17:17.790 "write_zeroes": true, 00:17:17.790 "zcopy": true, 00:17:17.790 "get_zone_info": false, 00:17:17.790 "zone_management": false, 00:17:17.790 "zone_append": false, 00:17:17.790 "compare": false, 00:17:17.790 "compare_and_write": false, 00:17:17.790 "abort": true, 00:17:17.790 "seek_hole": false, 00:17:17.790 "seek_data": false, 00:17:17.790 "copy": true, 00:17:17.790 "nvme_iov_md": false 00:17:17.790 }, 00:17:17.790 "memory_domains": [ 00:17:17.790 { 00:17:17.790 "dma_device_id": "system", 00:17:17.790 "dma_device_type": 1 00:17:17.790 }, 00:17:17.790 { 00:17:17.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:17.790 "dma_device_type": 2 00:17:17.790 } 00:17:17.790 ], 00:17:17.790 "driver_specific": {} 00:17:17.790 } 00:17:17.790 ] 00:17:17.790 14:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:17.790 14:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:17.790 14:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:17.790 14:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:18.356 BaseBdev3 00:17:18.356 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:18.356 14:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:18.356 14:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.356 14:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:18.356 14:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.356 14:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.356 14:10:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.356 14:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:18.614 [ 00:17:18.615 { 00:17:18.615 "name": "BaseBdev3", 00:17:18.615 "aliases": [ 00:17:18.615 "684d0e3d-8ff4-4e6c-98f6-94bf28b44393" 00:17:18.615 ], 00:17:18.615 "product_name": "Malloc disk", 00:17:18.615 "block_size": 512, 00:17:18.615 "num_blocks": 65536, 00:17:18.615 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:18.615 "assigned_rate_limits": { 00:17:18.615 "rw_ios_per_sec": 0, 00:17:18.615 "rw_mbytes_per_sec": 0, 00:17:18.615 "r_mbytes_per_sec": 0, 00:17:18.615 "w_mbytes_per_sec": 0 00:17:18.615 }, 00:17:18.615 "claimed": false, 00:17:18.615 "zoned": false, 00:17:18.615 "supported_io_types": { 00:17:18.615 "read": true, 00:17:18.615 "write": true, 00:17:18.615 "unmap": true, 00:17:18.615 "flush": true, 00:17:18.615 "reset": true, 00:17:18.615 "nvme_admin": false, 00:17:18.615 "nvme_io": false, 00:17:18.615 "nvme_io_md": false, 00:17:18.615 "write_zeroes": true, 00:17:18.615 "zcopy": true, 00:17:18.615 "get_zone_info": false, 00:17:18.615 "zone_management": false, 00:17:18.615 "zone_append": false, 00:17:18.615 "compare": false, 00:17:18.615 "compare_and_write": false, 00:17:18.615 "abort": true, 00:17:18.615 "seek_hole": false, 00:17:18.615 "seek_data": false, 00:17:18.615 "copy": true, 00:17:18.615 "nvme_iov_md": false 00:17:18.615 }, 00:17:18.615 "memory_domains": [ 00:17:18.615 { 00:17:18.615 "dma_device_id": "system", 00:17:18.615 "dma_device_type": 1 00:17:18.615 }, 00:17:18.615 { 00:17:18.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.615 "dma_device_type": 2 00:17:18.615 } 00:17:18.615 ], 00:17:18.615 "driver_specific": {} 00:17:18.615 } 00:17:18.615 ] 00:17:18.615 14:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:18.615 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:18.615 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:18.615 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:18.873 [2024-07-15 14:10:04.772444] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.873 [2024-07-15 14:10:04.772990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.873 [2024-07-15 14:10:04.773089] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:18.873 [2024-07-15 14:10:04.774550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:18.873 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:18.873 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.873 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:18.873 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:17:18.873 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.874 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.874 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.874 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.874 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.874 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.874 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.874 14:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.132 14:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:19.132 "name": "Existed_Raid", 00:17:19.132 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:19.132 "strip_size_kb": 64, 00:17:19.132 "state": "configuring", 00:17:19.132 "raid_level": "raid0", 00:17:19.132 "superblock": true, 00:17:19.132 "num_base_bdevs": 3, 00:17:19.132 "num_base_bdevs_discovered": 2, 00:17:19.132 "num_base_bdevs_operational": 3, 00:17:19.132 "base_bdevs_list": [ 00:17:19.132 { 00:17:19.132 "name": "BaseBdev1", 00:17:19.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.132 "is_configured": false, 00:17:19.132 "data_offset": 0, 00:17:19.132 "data_size": 0 00:17:19.132 }, 00:17:19.132 { 00:17:19.132 "name": "BaseBdev2", 00:17:19.132 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:19.132 "is_configured": true, 00:17:19.132 "data_offset": 2048, 00:17:19.132 "data_size": 63488 00:17:19.132 }, 00:17:19.132 { 00:17:19.132 "name": "BaseBdev3", 00:17:19.132 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:19.132 "is_configured": true, 00:17:19.132 "data_offset": 2048, 00:17:19.132 "data_size": 63488 00:17:19.132 } 00:17:19.132 ] 00:17:19.132 }' 00:17:19.132 14:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:19.132 14:10:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:20.068 14:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:20.325 [2024-07-15 14:10:06.088120] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:20.325 14:10:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.325 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.582 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.582 "name": "Existed_Raid", 00:17:20.582 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:20.582 "strip_size_kb": 64, 00:17:20.582 "state": "configuring", 00:17:20.582 "raid_level": "raid0", 00:17:20.582 "superblock": true, 00:17:20.582 "num_base_bdevs": 3, 00:17:20.582 "num_base_bdevs_discovered": 1, 00:17:20.582 "num_base_bdevs_operational": 3, 00:17:20.582 "base_bdevs_list": [ 00:17:20.582 { 00:17:20.582 "name": "BaseBdev1", 00:17:20.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.582 "is_configured": false, 00:17:20.582 "data_offset": 0, 00:17:20.582 "data_size": 0 00:17:20.582 }, 00:17:20.582 { 00:17:20.582 "name": null, 00:17:20.582 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:20.582 "is_configured": false, 00:17:20.582 "data_offset": 2048, 00:17:20.582 "data_size": 63488 00:17:20.582 }, 00:17:20.582 { 00:17:20.582 "name": "BaseBdev3", 00:17:20.582 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:20.582 "is_configured": true, 00:17:20.582 "data_offset": 2048, 00:17:20.582 "data_size": 63488 00:17:20.582 } 00:17:20.582 ] 00:17:20.582 }' 00:17:20.582 14:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.582 14:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:21.148 14:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.148 14:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:21.407 14:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:21.407 14:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:21.665 [2024-07-15 14:10:07.540610] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.665 BaseBdev1 00:17:21.665 14:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:21.665 14:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:21.665 14:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:21.665 14:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:21.665 14:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:21.665 14:10:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:21.665 14:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.923 14:10:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:22.181 [ 00:17:22.181 { 00:17:22.181 "name": "BaseBdev1", 00:17:22.181 "aliases": [ 00:17:22.181 "8c8f205a-6b26-402a-830e-d9e855ee9658" 00:17:22.181 ], 00:17:22.181 "product_name": "Malloc disk", 00:17:22.181 "block_size": 512, 00:17:22.181 "num_blocks": 65536, 00:17:22.181 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:22.181 "assigned_rate_limits": { 00:17:22.181 "rw_ios_per_sec": 0, 00:17:22.181 "rw_mbytes_per_sec": 0, 00:17:22.181 "r_mbytes_per_sec": 0, 00:17:22.181 "w_mbytes_per_sec": 0 00:17:22.181 }, 00:17:22.181 "claimed": true, 00:17:22.181 "claim_type": "exclusive_write", 00:17:22.181 "zoned": false, 00:17:22.181 "supported_io_types": { 00:17:22.181 "read": true, 00:17:22.181 "write": true, 00:17:22.181 "unmap": true, 00:17:22.181 "flush": true, 00:17:22.181 "reset": true, 00:17:22.181 "nvme_admin": false, 00:17:22.181 "nvme_io": false, 00:17:22.181 "nvme_io_md": false, 00:17:22.181 "write_zeroes": true, 00:17:22.181 "zcopy": true, 00:17:22.181 "get_zone_info": false, 00:17:22.181 "zone_management": false, 00:17:22.181 "zone_append": false, 00:17:22.181 "compare": false, 00:17:22.181 "compare_and_write": false, 00:17:22.181 "abort": true, 00:17:22.181 "seek_hole": false, 00:17:22.181 "seek_data": false, 00:17:22.181 "copy": true, 00:17:22.181 "nvme_iov_md": false 00:17:22.181 }, 00:17:22.181 "memory_domains": [ 00:17:22.181 { 00:17:22.181 "dma_device_id": "system", 00:17:22.181 "dma_device_type": 1 00:17:22.181 }, 00:17:22.181 { 00:17:22.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.182 "dma_device_type": 2 00:17:22.182 } 00:17:22.182 ], 00:17:22.182 "driver_specific": {} 00:17:22.182 } 00:17:22.182 ] 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.182 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.440 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.440 "name": "Existed_Raid", 00:17:22.440 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:22.440 "strip_size_kb": 64, 00:17:22.440 "state": "configuring", 00:17:22.440 "raid_level": "raid0", 00:17:22.440 "superblock": true, 00:17:22.440 "num_base_bdevs": 3, 00:17:22.440 "num_base_bdevs_discovered": 2, 00:17:22.440 "num_base_bdevs_operational": 3, 00:17:22.440 "base_bdevs_list": [ 00:17:22.440 { 00:17:22.440 "name": "BaseBdev1", 00:17:22.440 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:22.440 "is_configured": true, 00:17:22.440 "data_offset": 2048, 00:17:22.440 "data_size": 63488 00:17:22.440 }, 00:17:22.440 { 00:17:22.440 "name": null, 00:17:22.440 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:22.440 "is_configured": false, 00:17:22.440 "data_offset": 2048, 00:17:22.440 "data_size": 63488 00:17:22.440 }, 00:17:22.440 { 00:17:22.440 "name": "BaseBdev3", 00:17:22.440 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:22.440 "is_configured": true, 00:17:22.440 "data_offset": 2048, 00:17:22.440 "data_size": 63488 00:17:22.440 } 00:17:22.440 ] 00:17:22.440 }' 00:17:22.440 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.440 14:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:23.017 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.017 14:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:23.295 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:23.295 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:23.554 [2024-07-15 14:10:09.453087] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.554 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.814 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.814 "name": "Existed_Raid", 00:17:23.814 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:23.814 "strip_size_kb": 64, 00:17:23.814 "state": "configuring", 00:17:23.814 "raid_level": "raid0", 00:17:23.814 "superblock": true, 00:17:23.814 "num_base_bdevs": 3, 00:17:23.814 "num_base_bdevs_discovered": 1, 00:17:23.814 "num_base_bdevs_operational": 3, 00:17:23.814 "base_bdevs_list": [ 00:17:23.814 { 00:17:23.814 "name": "BaseBdev1", 00:17:23.814 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:23.814 "is_configured": true, 00:17:23.814 "data_offset": 2048, 00:17:23.814 "data_size": 63488 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "name": null, 00:17:23.814 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:23.814 "is_configured": false, 00:17:23.814 "data_offset": 2048, 00:17:23.814 "data_size": 63488 00:17:23.814 }, 00:17:23.814 { 00:17:23.814 "name": null, 00:17:23.814 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:23.814 "is_configured": false, 00:17:23.814 "data_offset": 2048, 00:17:23.814 "data_size": 63488 00:17:23.814 } 00:17:23.814 ] 00:17:23.814 }' 00:17:23.814 14:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.814 14:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:24.752 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.752 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:24.752 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:24.752 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:25.011 [2024-07-15 14:10:10.940182] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.011 14:10:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.011 14:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.269 14:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.269 "name": "Existed_Raid", 00:17:25.269 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:25.269 "strip_size_kb": 64, 00:17:25.269 "state": "configuring", 00:17:25.269 "raid_level": "raid0", 00:17:25.269 "superblock": true, 00:17:25.269 "num_base_bdevs": 3, 00:17:25.269 "num_base_bdevs_discovered": 2, 00:17:25.269 "num_base_bdevs_operational": 3, 00:17:25.269 "base_bdevs_list": [ 00:17:25.269 { 00:17:25.269 "name": "BaseBdev1", 00:17:25.269 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:25.269 "is_configured": true, 00:17:25.269 "data_offset": 2048, 00:17:25.269 "data_size": 63488 00:17:25.269 }, 00:17:25.269 { 00:17:25.269 "name": null, 00:17:25.269 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:25.269 "is_configured": false, 00:17:25.269 "data_offset": 2048, 00:17:25.269 "data_size": 63488 00:17:25.269 }, 00:17:25.269 { 00:17:25.269 "name": "BaseBdev3", 00:17:25.269 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:25.269 "is_configured": true, 00:17:25.269 "data_offset": 2048, 00:17:25.269 "data_size": 63488 00:17:25.269 } 00:17:25.269 ] 00:17:25.269 }' 00:17:25.269 14:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.269 14:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:25.836 14:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.836 14:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:26.404 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:26.404 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:26.404 [2024-07-15 14:10:12.369425] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.662 14:10:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.662 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.942 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.942 "name": "Existed_Raid", 00:17:26.942 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:26.942 "strip_size_kb": 64, 00:17:26.942 "state": "configuring", 00:17:26.942 "raid_level": "raid0", 00:17:26.942 "superblock": true, 00:17:26.942 "num_base_bdevs": 3, 00:17:26.942 "num_base_bdevs_discovered": 1, 00:17:26.942 "num_base_bdevs_operational": 3, 00:17:26.942 "base_bdevs_list": [ 00:17:26.942 { 00:17:26.942 "name": null, 00:17:26.942 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:26.942 "is_configured": false, 00:17:26.942 "data_offset": 2048, 00:17:26.942 "data_size": 63488 00:17:26.942 }, 00:17:26.942 { 00:17:26.942 "name": null, 00:17:26.942 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:26.942 "is_configured": false, 00:17:26.942 "data_offset": 2048, 00:17:26.942 "data_size": 63488 00:17:26.942 }, 00:17:26.942 { 00:17:26.942 "name": "BaseBdev3", 00:17:26.942 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:26.942 "is_configured": true, 00:17:26.942 "data_offset": 2048, 00:17:26.942 "data_size": 63488 00:17:26.942 } 00:17:26.942 ] 00:17:26.942 }' 00:17:26.942 14:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.942 14:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:27.507 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.507 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:27.765 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:27.765 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:28.022 [2024-07-15 14:10:13.852175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.022 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.023 14:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.280 14:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.280 "name": "Existed_Raid", 00:17:28.280 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:28.280 "strip_size_kb": 64, 00:17:28.280 "state": "configuring", 00:17:28.280 "raid_level": "raid0", 00:17:28.280 "superblock": true, 00:17:28.280 "num_base_bdevs": 3, 00:17:28.280 "num_base_bdevs_discovered": 2, 00:17:28.280 "num_base_bdevs_operational": 3, 00:17:28.280 "base_bdevs_list": [ 00:17:28.280 { 00:17:28.280 "name": null, 00:17:28.280 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:28.280 "is_configured": false, 00:17:28.280 "data_offset": 2048, 00:17:28.280 "data_size": 63488 00:17:28.280 }, 00:17:28.280 { 00:17:28.280 "name": "BaseBdev2", 00:17:28.280 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:28.280 "is_configured": true, 00:17:28.280 "data_offset": 2048, 00:17:28.280 "data_size": 63488 00:17:28.280 }, 00:17:28.280 { 00:17:28.280 "name": "BaseBdev3", 00:17:28.280 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:28.280 "is_configured": true, 00:17:28.280 "data_offset": 2048, 00:17:28.280 "data_size": 63488 00:17:28.280 } 00:17:28.280 ] 00:17:28.280 }' 00:17:28.280 14:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.280 14:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:28.846 14:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.846 14:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:29.104 14:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:29.104 14:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.104 14:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:29.361 14:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8c8f205a-6b26-402a-830e-d9e855ee9658 00:17:29.618 [2024-07-15 14:10:15.519596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:29.618 [2024-07-15 14:10:15.519822] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:29.618 [2024-07-15 14:10:15.519838] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 190464, blocklen 512 00:17:29.618 [2024-07-15 14:10:15.519922] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:29.618 [2024-07-15 14:10:15.520146] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:29.618 [2024-07-15 14:10:15.520161] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:17:29.618 [2024-07-15 14:10:15.520260] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.618 NewBaseBdev 00:17:29.618 14:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:29.618 14:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:17:29.618 14:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:29.618 14:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:17:29.618 14:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:29.618 14:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:29.618 14:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.875 14:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:30.131 [ 00:17:30.131 { 00:17:30.131 "name": "NewBaseBdev", 00:17:30.131 "aliases": [ 00:17:30.131 "8c8f205a-6b26-402a-830e-d9e855ee9658" 00:17:30.131 ], 00:17:30.131 "product_name": "Malloc disk", 00:17:30.131 "block_size": 512, 00:17:30.131 "num_blocks": 65536, 00:17:30.132 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:30.132 "assigned_rate_limits": { 00:17:30.132 "rw_ios_per_sec": 0, 00:17:30.132 "rw_mbytes_per_sec": 0, 00:17:30.132 "r_mbytes_per_sec": 0, 00:17:30.132 "w_mbytes_per_sec": 0 00:17:30.132 }, 00:17:30.132 "claimed": true, 00:17:30.132 "claim_type": "exclusive_write", 00:17:30.132 "zoned": false, 00:17:30.132 "supported_io_types": { 00:17:30.132 "read": true, 00:17:30.132 "write": true, 00:17:30.132 "unmap": true, 00:17:30.132 "flush": true, 00:17:30.132 "reset": true, 00:17:30.132 "nvme_admin": false, 00:17:30.132 "nvme_io": false, 00:17:30.132 "nvme_io_md": false, 00:17:30.132 "write_zeroes": true, 00:17:30.132 "zcopy": true, 00:17:30.132 "get_zone_info": false, 00:17:30.132 "zone_management": false, 00:17:30.132 "zone_append": false, 00:17:30.132 "compare": false, 00:17:30.132 "compare_and_write": false, 00:17:30.132 "abort": true, 00:17:30.132 "seek_hole": false, 00:17:30.132 "seek_data": false, 00:17:30.132 "copy": true, 00:17:30.132 "nvme_iov_md": false 00:17:30.132 }, 00:17:30.132 "memory_domains": [ 00:17:30.132 { 00:17:30.132 "dma_device_id": "system", 00:17:30.132 "dma_device_type": 1 00:17:30.132 }, 00:17:30.132 { 00:17:30.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.132 "dma_device_type": 2 00:17:30.132 } 00:17:30.132 ], 00:17:30.132 "driver_specific": {} 00:17:30.132 } 00:17:30.132 ] 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.132 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.390 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.390 "name": "Existed_Raid", 00:17:30.390 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:30.390 "strip_size_kb": 64, 00:17:30.390 "state": "online", 00:17:30.390 "raid_level": "raid0", 00:17:30.390 "superblock": true, 00:17:30.390 "num_base_bdevs": 3, 00:17:30.390 "num_base_bdevs_discovered": 3, 00:17:30.390 "num_base_bdevs_operational": 3, 00:17:30.390 "base_bdevs_list": [ 00:17:30.390 { 00:17:30.390 "name": "NewBaseBdev", 00:17:30.390 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:30.390 "is_configured": true, 00:17:30.390 "data_offset": 2048, 00:17:30.390 "data_size": 63488 00:17:30.390 }, 00:17:30.390 { 00:17:30.390 "name": "BaseBdev2", 00:17:30.390 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:30.390 "is_configured": true, 00:17:30.390 "data_offset": 2048, 00:17:30.390 "data_size": 63488 00:17:30.390 }, 00:17:30.390 { 00:17:30.390 "name": "BaseBdev3", 00:17:30.390 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:30.390 "is_configured": true, 00:17:30.390 "data_offset": 2048, 00:17:30.390 "data_size": 63488 00:17:30.390 } 00:17:30.390 ] 00:17:30.390 }' 00:17:30.390 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.390 14:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:31.013 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:31.013 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:31.013 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:31.013 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:31.013 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:31.013 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:31.013 14:10:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:31.013 14:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:31.275 [2024-07-15 14:10:17.200130] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.275 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:31.275 "name": "Existed_Raid", 00:17:31.275 "aliases": [ 00:17:31.275 "8fc681c9-f2dc-49ef-9a84-80b970cb8787" 00:17:31.275 ], 00:17:31.275 "product_name": "Raid Volume", 00:17:31.275 "block_size": 512, 00:17:31.275 "num_blocks": 190464, 00:17:31.275 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:31.275 "assigned_rate_limits": { 00:17:31.275 "rw_ios_per_sec": 0, 00:17:31.275 "rw_mbytes_per_sec": 0, 00:17:31.275 "r_mbytes_per_sec": 0, 00:17:31.275 "w_mbytes_per_sec": 0 00:17:31.275 }, 00:17:31.275 "claimed": false, 00:17:31.275 "zoned": false, 00:17:31.275 "supported_io_types": { 00:17:31.275 "read": true, 00:17:31.275 "write": true, 00:17:31.275 "unmap": true, 00:17:31.275 "flush": true, 00:17:31.275 "reset": true, 00:17:31.276 "nvme_admin": false, 00:17:31.276 "nvme_io": false, 00:17:31.276 "nvme_io_md": false, 00:17:31.276 "write_zeroes": true, 00:17:31.276 "zcopy": false, 00:17:31.276 "get_zone_info": false, 00:17:31.276 "zone_management": false, 00:17:31.276 "zone_append": false, 00:17:31.276 "compare": false, 00:17:31.276 "compare_and_write": false, 00:17:31.276 "abort": false, 00:17:31.276 "seek_hole": false, 00:17:31.276 "seek_data": false, 00:17:31.276 "copy": false, 00:17:31.276 "nvme_iov_md": false 00:17:31.276 }, 00:17:31.276 "memory_domains": [ 00:17:31.276 { 00:17:31.276 "dma_device_id": "system", 00:17:31.276 "dma_device_type": 1 00:17:31.276 }, 00:17:31.276 { 00:17:31.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.276 "dma_device_type": 2 00:17:31.276 }, 00:17:31.276 { 00:17:31.276 "dma_device_id": "system", 00:17:31.276 "dma_device_type": 1 00:17:31.276 }, 00:17:31.276 { 00:17:31.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.276 "dma_device_type": 2 00:17:31.276 }, 00:17:31.276 { 00:17:31.276 "dma_device_id": "system", 00:17:31.276 "dma_device_type": 1 00:17:31.276 }, 00:17:31.276 { 00:17:31.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.276 "dma_device_type": 2 00:17:31.276 } 00:17:31.276 ], 00:17:31.276 "driver_specific": { 00:17:31.276 "raid": { 00:17:31.276 "uuid": "8fc681c9-f2dc-49ef-9a84-80b970cb8787", 00:17:31.276 "strip_size_kb": 64, 00:17:31.276 "state": "online", 00:17:31.276 "raid_level": "raid0", 00:17:31.276 "superblock": true, 00:17:31.276 "num_base_bdevs": 3, 00:17:31.276 "num_base_bdevs_discovered": 3, 00:17:31.276 "num_base_bdevs_operational": 3, 00:17:31.276 "base_bdevs_list": [ 00:17:31.276 { 00:17:31.276 "name": "NewBaseBdev", 00:17:31.276 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:31.276 "is_configured": true, 00:17:31.276 "data_offset": 2048, 00:17:31.276 "data_size": 63488 00:17:31.276 }, 00:17:31.276 { 00:17:31.276 "name": "BaseBdev2", 00:17:31.276 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:31.276 "is_configured": true, 00:17:31.276 "data_offset": 2048, 00:17:31.276 "data_size": 63488 00:17:31.276 }, 00:17:31.276 { 00:17:31.276 "name": "BaseBdev3", 00:17:31.276 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:31.276 "is_configured": true, 00:17:31.276 "data_offset": 2048, 00:17:31.276 "data_size": 
63488 00:17:31.276 } 00:17:31.276 ] 00:17:31.276 } 00:17:31.276 } 00:17:31.276 }' 00:17:31.276 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.276 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:31.276 BaseBdev2 00:17:31.276 BaseBdev3' 00:17:31.276 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.276 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:31.276 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.844 "name": "NewBaseBdev", 00:17:31.844 "aliases": [ 00:17:31.844 "8c8f205a-6b26-402a-830e-d9e855ee9658" 00:17:31.844 ], 00:17:31.844 "product_name": "Malloc disk", 00:17:31.844 "block_size": 512, 00:17:31.844 "num_blocks": 65536, 00:17:31.844 "uuid": "8c8f205a-6b26-402a-830e-d9e855ee9658", 00:17:31.844 "assigned_rate_limits": { 00:17:31.844 "rw_ios_per_sec": 0, 00:17:31.844 "rw_mbytes_per_sec": 0, 00:17:31.844 "r_mbytes_per_sec": 0, 00:17:31.844 "w_mbytes_per_sec": 0 00:17:31.844 }, 00:17:31.844 "claimed": true, 00:17:31.844 "claim_type": "exclusive_write", 00:17:31.844 "zoned": false, 00:17:31.844 "supported_io_types": { 00:17:31.844 "read": true, 00:17:31.844 "write": true, 00:17:31.844 "unmap": true, 00:17:31.844 "flush": true, 00:17:31.844 "reset": true, 00:17:31.844 "nvme_admin": false, 00:17:31.844 "nvme_io": false, 00:17:31.844 "nvme_io_md": false, 00:17:31.844 "write_zeroes": true, 00:17:31.844 "zcopy": true, 00:17:31.844 "get_zone_info": false, 00:17:31.844 "zone_management": false, 00:17:31.844 "zone_append": false, 00:17:31.844 "compare": false, 00:17:31.844 "compare_and_write": false, 00:17:31.844 "abort": true, 00:17:31.844 "seek_hole": false, 00:17:31.844 "seek_data": false, 00:17:31.844 "copy": true, 00:17:31.844 "nvme_iov_md": false 00:17:31.844 }, 00:17:31.844 "memory_domains": [ 00:17:31.844 { 00:17:31.844 "dma_device_id": "system", 00:17:31.844 "dma_device_type": 1 00:17:31.844 }, 00:17:31.844 { 00:17:31.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.844 "dma_device_type": 2 00:17:31.844 } 00:17:31.844 ], 00:17:31.844 "driver_specific": {} 00:17:31.844 }' 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:31.844 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:31.844 14:10:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.120 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.120 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.120 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.120 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:32.120 14:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.379 "name": "BaseBdev2", 00:17:32.379 "aliases": [ 00:17:32.379 "dfa12536-d60f-4a26-90ef-204975331101" 00:17:32.379 ], 00:17:32.379 "product_name": "Malloc disk", 00:17:32.379 "block_size": 512, 00:17:32.379 "num_blocks": 65536, 00:17:32.379 "uuid": "dfa12536-d60f-4a26-90ef-204975331101", 00:17:32.379 "assigned_rate_limits": { 00:17:32.379 "rw_ios_per_sec": 0, 00:17:32.379 "rw_mbytes_per_sec": 0, 00:17:32.379 "r_mbytes_per_sec": 0, 00:17:32.379 "w_mbytes_per_sec": 0 00:17:32.379 }, 00:17:32.379 "claimed": true, 00:17:32.379 "claim_type": "exclusive_write", 00:17:32.379 "zoned": false, 00:17:32.379 "supported_io_types": { 00:17:32.379 "read": true, 00:17:32.379 "write": true, 00:17:32.379 "unmap": true, 00:17:32.379 "flush": true, 00:17:32.379 "reset": true, 00:17:32.379 "nvme_admin": false, 00:17:32.379 "nvme_io": false, 00:17:32.379 "nvme_io_md": false, 00:17:32.379 "write_zeroes": true, 00:17:32.379 "zcopy": true, 00:17:32.379 "get_zone_info": false, 00:17:32.379 "zone_management": false, 00:17:32.379 "zone_append": false, 00:17:32.379 "compare": false, 00:17:32.379 "compare_and_write": false, 00:17:32.379 "abort": true, 00:17:32.379 "seek_hole": false, 00:17:32.379 "seek_data": false, 00:17:32.379 "copy": true, 00:17:32.379 "nvme_iov_md": false 00:17:32.379 }, 00:17:32.379 "memory_domains": [ 00:17:32.379 { 00:17:32.379 "dma_device_id": "system", 00:17:32.379 "dma_device_type": 1 00:17:32.379 }, 00:17:32.379 { 00:17:32.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.379 "dma_device_type": 2 00:17:32.379 } 00:17:32.379 ], 00:17:32.379 "driver_specific": {} 00:17:32.379 }' 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:32.379 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:32.639 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.898 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.898 "name": "BaseBdev3", 00:17:32.898 "aliases": [ 00:17:32.898 "684d0e3d-8ff4-4e6c-98f6-94bf28b44393" 00:17:32.898 ], 00:17:32.898 "product_name": "Malloc disk", 00:17:32.898 "block_size": 512, 00:17:32.898 "num_blocks": 65536, 00:17:32.898 "uuid": "684d0e3d-8ff4-4e6c-98f6-94bf28b44393", 00:17:32.898 "assigned_rate_limits": { 00:17:32.898 "rw_ios_per_sec": 0, 00:17:32.898 "rw_mbytes_per_sec": 0, 00:17:32.898 "r_mbytes_per_sec": 0, 00:17:32.898 "w_mbytes_per_sec": 0 00:17:32.898 }, 00:17:32.898 "claimed": true, 00:17:32.898 "claim_type": "exclusive_write", 00:17:32.898 "zoned": false, 00:17:32.898 "supported_io_types": { 00:17:32.898 "read": true, 00:17:32.898 "write": true, 00:17:32.898 "unmap": true, 00:17:32.898 "flush": true, 00:17:32.898 "reset": true, 00:17:32.898 "nvme_admin": false, 00:17:32.898 "nvme_io": false, 00:17:32.898 "nvme_io_md": false, 00:17:32.898 "write_zeroes": true, 00:17:32.898 "zcopy": true, 00:17:32.898 "get_zone_info": false, 00:17:32.898 "zone_management": false, 00:17:32.898 "zone_append": false, 00:17:32.898 "compare": false, 00:17:32.898 "compare_and_write": false, 00:17:32.898 "abort": true, 00:17:32.898 "seek_hole": false, 00:17:32.898 "seek_data": false, 00:17:32.898 "copy": true, 00:17:32.898 "nvme_iov_md": false 00:17:32.898 }, 00:17:32.898 "memory_domains": [ 00:17:32.898 { 00:17:32.898 "dma_device_id": "system", 00:17:32.898 "dma_device_type": 1 00:17:32.898 }, 00:17:32.898 { 00:17:32.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.898 "dma_device_type": 2 00:17:32.898 } 00:17:32.898 ], 00:17:32.898 "driver_specific": {} 00:17:32.898 }' 00:17:32.898 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.158 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:33.158 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:33.158 14:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.158 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:33.158 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:33.158 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.158 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:33.427 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:33.427 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.427 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:33.428 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
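The trace above repeats one verification pattern for every base bdev: dump the bdev with rpc.py over the test's private socket, then compare individual fields (block_size, md_size, md_interleave, dif_type) against expected values with jq. The lines below are a condensed, illustrative rendering of that pattern, not the exact helper from bdev_raid.sh; the check_bdev_field name is invented for the example, while the rpc.py invocation and jq filters are taken from the log.

# Sketch only: condensed form of the per-bdev field checks seen in the trace above.
# check_bdev_field is an invented helper name; the RPC socket path and jq filters match the log.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

check_bdev_field() {
    local bdev=$1 filter=$2 expected=$3
    local actual
    actual=$($RPC bdev_get_bdevs -b "$bdev" | jq -r ".[] | $filter")
    [[ "$actual" == "$expected" ]] || { echo "mismatch on $bdev $filter: got $actual, want $expected"; return 1; }
}

for name in NewBaseBdev BaseBdev2 BaseBdev3; do
    check_bdev_field "$name" .block_size    512  || exit 1
    check_bdev_field "$name" .md_size       null || exit 1
    check_bdev_field "$name" .md_interleave null || exit 1
    check_bdev_field "$name" .dif_type      null || exit 1
done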
00:17:33.428 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.686 [2024-07-15 14:10:19.540344] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.686 [2024-07-15 14:10:19.540636] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.686 [2024-07-15 14:10:19.540851] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.686 [2024-07-15 14:10:19.541024] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.686 [2024-07-15 14:10:19.541147] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 192220 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 192220 ']' 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 192220 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 192220 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 192220' 00:17:33.686 killing process with pid 192220 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 192220 00:17:33.686 [2024-07-15 14:10:19.588150] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.686 14:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 192220 00:17:33.945 [2024-07-15 14:10:19.846365] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.329 14:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:35.329 00:17:35.329 real 0m32.923s 00:17:35.329 user 1m0.723s 00:17:35.329 sys 0m3.757s 00:17:35.329 14:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:35.329 14:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:35.329 ************************************ 00:17:35.329 END TEST raid_state_function_test_sb 00:17:35.329 ************************************ 00:17:35.329 14:10:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:35.329 14:10:21 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:35.329 14:10:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:35.329 14:10:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.329 14:10:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.329 ************************************ 00:17:35.329 START TEST raid_superblock_test 00:17:35.329 
************************************ 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=193238 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 193238 /var/tmp/spdk-raid.sock 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 193238 ']' 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:35.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.329 14:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.329 [2024-07-15 14:10:21.063833] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
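Every test in this log follows the same harness lifecycle: start bdev_svc with a private RPC socket and the bdev_raid log flag, wait for the socket to come up, drive all configuration through rpc.py, verify state with bdev_raid_get_bdevs and bdev_get_bdevs, and finally kill the daemon (the killprocess call for pid 192220 above is that teardown for the previous test). The commands below are a minimal standalone sketch of that lifecycle assembled from commands visible in the trace; the rpc_get_methods polling loop is a simplified stand-in for the waitforlisten and killprocess helpers in autotest_common.sh, not their actual implementation.

# Sketch only: simplified lifecycle of one raid test, mirroring commands from the trace.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock
RPC="$SPDK/scripts/rpc.py -s $SOCK"

"$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -L bdev_raid &
raid_pid=$!
# Crude replacement for waitforlisten: poll until the RPC socket answers.
until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

for i in 1 2 3; do
    $RPC bdev_malloc_create 32 512 -b "malloc$i"
    $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
$RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'

$RPC bdev_raid_delete raid_bdev1
kill "$raid_pid"; wait "$raid_pid"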
00:17:35.329 [2024-07-15 14:10:21.064251] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193238 ] 00:17:35.329 [2024-07-15 14:10:21.228516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.588 [2024-07-15 14:10:21.474375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.847 [2024-07-15 14:10:21.668853] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.106 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:36.364 malloc1 00:17:36.364 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.623 [2024-07-15 14:10:22.527598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.623 [2024-07-15 14:10:22.527938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.623 [2024-07-15 14:10:22.528097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:36.623 [2024-07-15 14:10:22.528226] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.623 [2024-07-15 14:10:22.530120] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.623 [2024-07-15 14:10:22.530338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.623 pt1 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.623 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:36.881 malloc2 00:17:36.881 14:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.140 [2024-07-15 14:10:23.042104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.140 [2024-07-15 14:10:23.042522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.140 [2024-07-15 14:10:23.042683] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:37.140 [2024-07-15 14:10:23.042835] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.140 [2024-07-15 14:10:23.044643] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.140 [2024-07-15 14:10:23.044834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.140 pt2 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:37.140 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:37.402 malloc3 00:17:37.402 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:37.659 [2024-07-15 14:10:23.601952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:37.659 [2024-07-15 14:10:23.602262] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.659 [2024-07-15 14:10:23.602415] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:37.659 [2024-07-15 14:10:23.602551] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.659 [2024-07-15 14:10:23.604345] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.659 [2024-07-15 14:10:23.604515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.659 pt3 00:17:37.659 
14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:37.659 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:37.659 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:37.917 [2024-07-15 14:10:23.838025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.917 [2024-07-15 14:10:23.839779] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.917 [2024-07-15 14:10:23.840025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.917 [2024-07-15 14:10:23.840353] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:37.917 [2024-07-15 14:10:23.840493] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:37.917 [2024-07-15 14:10:23.840714] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:37.917 [2024-07-15 14:10:23.841331] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:37.917 [2024-07-15 14:10:23.841478] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:37.917 [2024-07-15 14:10:23.841718] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.917 14:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.175 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:38.175 "name": "raid_bdev1", 00:17:38.175 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:38.175 "strip_size_kb": 64, 00:17:38.175 "state": "online", 00:17:38.175 "raid_level": "raid0", 00:17:38.175 "superblock": true, 00:17:38.175 "num_base_bdevs": 3, 00:17:38.175 "num_base_bdevs_discovered": 3, 00:17:38.175 "num_base_bdevs_operational": 3, 00:17:38.175 "base_bdevs_list": [ 00:17:38.175 { 00:17:38.175 "name": "pt1", 00:17:38.175 "uuid": "00000000-0000-0000-0000-000000000001", 
00:17:38.175 "is_configured": true, 00:17:38.175 "data_offset": 2048, 00:17:38.175 "data_size": 63488 00:17:38.175 }, 00:17:38.175 { 00:17:38.175 "name": "pt2", 00:17:38.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.175 "is_configured": true, 00:17:38.175 "data_offset": 2048, 00:17:38.175 "data_size": 63488 00:17:38.175 }, 00:17:38.175 { 00:17:38.175 "name": "pt3", 00:17:38.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:38.175 "is_configured": true, 00:17:38.175 "data_offset": 2048, 00:17:38.175 "data_size": 63488 00:17:38.175 } 00:17:38.175 ] 00:17:38.175 }' 00:17:38.175 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:38.175 14:10:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:39.131 14:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:39.131 [2024-07-15 14:10:25.082556] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.131 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:39.131 "name": "raid_bdev1", 00:17:39.131 "aliases": [ 00:17:39.131 "c0cbeda6-a0df-404d-b482-e7f5e6e096eb" 00:17:39.131 ], 00:17:39.131 "product_name": "Raid Volume", 00:17:39.131 "block_size": 512, 00:17:39.131 "num_blocks": 190464, 00:17:39.131 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:39.131 "assigned_rate_limits": { 00:17:39.131 "rw_ios_per_sec": 0, 00:17:39.131 "rw_mbytes_per_sec": 0, 00:17:39.131 "r_mbytes_per_sec": 0, 00:17:39.131 "w_mbytes_per_sec": 0 00:17:39.131 }, 00:17:39.131 "claimed": false, 00:17:39.131 "zoned": false, 00:17:39.131 "supported_io_types": { 00:17:39.131 "read": true, 00:17:39.131 "write": true, 00:17:39.131 "unmap": true, 00:17:39.131 "flush": true, 00:17:39.131 "reset": true, 00:17:39.131 "nvme_admin": false, 00:17:39.131 "nvme_io": false, 00:17:39.131 "nvme_io_md": false, 00:17:39.131 "write_zeroes": true, 00:17:39.131 "zcopy": false, 00:17:39.131 "get_zone_info": false, 00:17:39.131 "zone_management": false, 00:17:39.131 "zone_append": false, 00:17:39.131 "compare": false, 00:17:39.131 "compare_and_write": false, 00:17:39.131 "abort": false, 00:17:39.131 "seek_hole": false, 00:17:39.131 "seek_data": false, 00:17:39.131 "copy": false, 00:17:39.131 "nvme_iov_md": false 00:17:39.131 }, 00:17:39.131 "memory_domains": [ 00:17:39.131 { 00:17:39.131 "dma_device_id": "system", 00:17:39.131 "dma_device_type": 1 00:17:39.131 }, 00:17:39.131 { 00:17:39.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.131 "dma_device_type": 2 00:17:39.131 }, 00:17:39.131 { 00:17:39.131 "dma_device_id": "system", 00:17:39.131 "dma_device_type": 1 00:17:39.131 }, 
00:17:39.131 { 00:17:39.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.131 "dma_device_type": 2 00:17:39.131 }, 00:17:39.131 { 00:17:39.131 "dma_device_id": "system", 00:17:39.131 "dma_device_type": 1 00:17:39.131 }, 00:17:39.131 { 00:17:39.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.131 "dma_device_type": 2 00:17:39.131 } 00:17:39.131 ], 00:17:39.131 "driver_specific": { 00:17:39.131 "raid": { 00:17:39.131 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:39.131 "strip_size_kb": 64, 00:17:39.131 "state": "online", 00:17:39.131 "raid_level": "raid0", 00:17:39.131 "superblock": true, 00:17:39.131 "num_base_bdevs": 3, 00:17:39.131 "num_base_bdevs_discovered": 3, 00:17:39.131 "num_base_bdevs_operational": 3, 00:17:39.132 "base_bdevs_list": [ 00:17:39.132 { 00:17:39.132 "name": "pt1", 00:17:39.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.132 "is_configured": true, 00:17:39.132 "data_offset": 2048, 00:17:39.132 "data_size": 63488 00:17:39.132 }, 00:17:39.132 { 00:17:39.132 "name": "pt2", 00:17:39.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.132 "is_configured": true, 00:17:39.132 "data_offset": 2048, 00:17:39.132 "data_size": 63488 00:17:39.132 }, 00:17:39.132 { 00:17:39.132 "name": "pt3", 00:17:39.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:39.132 "is_configured": true, 00:17:39.132 "data_offset": 2048, 00:17:39.132 "data_size": 63488 00:17:39.132 } 00:17:39.132 ] 00:17:39.132 } 00:17:39.132 } 00:17:39.132 }' 00:17:39.132 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:39.390 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:39.390 pt2 00:17:39.390 pt3' 00:17:39.390 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.390 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.390 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:39.648 "name": "pt1", 00:17:39.648 "aliases": [ 00:17:39.648 "00000000-0000-0000-0000-000000000001" 00:17:39.648 ], 00:17:39.648 "product_name": "passthru", 00:17:39.648 "block_size": 512, 00:17:39.648 "num_blocks": 65536, 00:17:39.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:39.648 "assigned_rate_limits": { 00:17:39.648 "rw_ios_per_sec": 0, 00:17:39.648 "rw_mbytes_per_sec": 0, 00:17:39.648 "r_mbytes_per_sec": 0, 00:17:39.648 "w_mbytes_per_sec": 0 00:17:39.648 }, 00:17:39.648 "claimed": true, 00:17:39.648 "claim_type": "exclusive_write", 00:17:39.648 "zoned": false, 00:17:39.648 "supported_io_types": { 00:17:39.648 "read": true, 00:17:39.648 "write": true, 00:17:39.648 "unmap": true, 00:17:39.648 "flush": true, 00:17:39.648 "reset": true, 00:17:39.648 "nvme_admin": false, 00:17:39.648 "nvme_io": false, 00:17:39.648 "nvme_io_md": false, 00:17:39.648 "write_zeroes": true, 00:17:39.648 "zcopy": true, 00:17:39.648 "get_zone_info": false, 00:17:39.648 "zone_management": false, 00:17:39.648 "zone_append": false, 00:17:39.648 "compare": false, 00:17:39.648 "compare_and_write": false, 00:17:39.648 "abort": true, 00:17:39.648 "seek_hole": false, 00:17:39.648 "seek_data": false, 00:17:39.648 "copy": true, 00:17:39.648 "nvme_iov_md": false 
00:17:39.648 }, 00:17:39.648 "memory_domains": [ 00:17:39.648 { 00:17:39.648 "dma_device_id": "system", 00:17:39.648 "dma_device_type": 1 00:17:39.648 }, 00:17:39.648 { 00:17:39.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:39.648 "dma_device_type": 2 00:17:39.648 } 00:17:39.648 ], 00:17:39.648 "driver_specific": { 00:17:39.648 "passthru": { 00:17:39.648 "name": "pt1", 00:17:39.648 "base_bdev_name": "malloc1" 00:17:39.648 } 00:17:39.648 } 00:17:39.648 }' 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:39.648 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:39.906 14:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:40.164 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.164 "name": "pt2", 00:17:40.164 "aliases": [ 00:17:40.164 "00000000-0000-0000-0000-000000000002" 00:17:40.164 ], 00:17:40.164 "product_name": "passthru", 00:17:40.164 "block_size": 512, 00:17:40.164 "num_blocks": 65536, 00:17:40.164 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.164 "assigned_rate_limits": { 00:17:40.164 "rw_ios_per_sec": 0, 00:17:40.164 "rw_mbytes_per_sec": 0, 00:17:40.164 "r_mbytes_per_sec": 0, 00:17:40.164 "w_mbytes_per_sec": 0 00:17:40.164 }, 00:17:40.164 "claimed": true, 00:17:40.164 "claim_type": "exclusive_write", 00:17:40.164 "zoned": false, 00:17:40.164 "supported_io_types": { 00:17:40.164 "read": true, 00:17:40.164 "write": true, 00:17:40.164 "unmap": true, 00:17:40.164 "flush": true, 00:17:40.164 "reset": true, 00:17:40.164 "nvme_admin": false, 00:17:40.164 "nvme_io": false, 00:17:40.164 "nvme_io_md": false, 00:17:40.164 "write_zeroes": true, 00:17:40.164 "zcopy": true, 00:17:40.164 "get_zone_info": false, 00:17:40.164 "zone_management": false, 00:17:40.164 "zone_append": false, 00:17:40.164 "compare": false, 00:17:40.164 "compare_and_write": false, 00:17:40.164 "abort": true, 00:17:40.164 "seek_hole": false, 00:17:40.164 "seek_data": false, 00:17:40.164 "copy": true, 00:17:40.164 "nvme_iov_md": false 00:17:40.164 }, 00:17:40.164 "memory_domains": [ 00:17:40.164 { 00:17:40.164 "dma_device_id": "system", 00:17:40.164 "dma_device_type": 1 00:17:40.164 }, 
00:17:40.164 { 00:17:40.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.164 "dma_device_type": 2 00:17:40.164 } 00:17:40.164 ], 00:17:40.164 "driver_specific": { 00:17:40.164 "passthru": { 00:17:40.164 "name": "pt2", 00:17:40.164 "base_bdev_name": "malloc2" 00:17:40.164 } 00:17:40.164 } 00:17:40.164 }' 00:17:40.164 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.164 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.164 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:40.164 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:40.422 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:40.680 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:40.938 "name": "pt3", 00:17:40.938 "aliases": [ 00:17:40.938 "00000000-0000-0000-0000-000000000003" 00:17:40.938 ], 00:17:40.938 "product_name": "passthru", 00:17:40.938 "block_size": 512, 00:17:40.938 "num_blocks": 65536, 00:17:40.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:40.938 "assigned_rate_limits": { 00:17:40.938 "rw_ios_per_sec": 0, 00:17:40.938 "rw_mbytes_per_sec": 0, 00:17:40.938 "r_mbytes_per_sec": 0, 00:17:40.938 "w_mbytes_per_sec": 0 00:17:40.938 }, 00:17:40.938 "claimed": true, 00:17:40.938 "claim_type": "exclusive_write", 00:17:40.938 "zoned": false, 00:17:40.938 "supported_io_types": { 00:17:40.938 "read": true, 00:17:40.938 "write": true, 00:17:40.938 "unmap": true, 00:17:40.938 "flush": true, 00:17:40.938 "reset": true, 00:17:40.938 "nvme_admin": false, 00:17:40.938 "nvme_io": false, 00:17:40.938 "nvme_io_md": false, 00:17:40.938 "write_zeroes": true, 00:17:40.938 "zcopy": true, 00:17:40.938 "get_zone_info": false, 00:17:40.938 "zone_management": false, 00:17:40.938 "zone_append": false, 00:17:40.938 "compare": false, 00:17:40.938 "compare_and_write": false, 00:17:40.938 "abort": true, 00:17:40.938 "seek_hole": false, 00:17:40.938 "seek_data": false, 00:17:40.938 "copy": true, 00:17:40.938 "nvme_iov_md": false 00:17:40.938 }, 00:17:40.938 "memory_domains": [ 00:17:40.938 { 00:17:40.938 "dma_device_id": "system", 00:17:40.938 "dma_device_type": 1 00:17:40.938 }, 00:17:40.938 { 00:17:40.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.938 "dma_device_type": 2 00:17:40.938 } 00:17:40.938 ], 00:17:40.938 
"driver_specific": { 00:17:40.938 "passthru": { 00:17:40.938 "name": "pt3", 00:17:40.938 "base_bdev_name": "malloc3" 00:17:40.938 } 00:17:40.938 } 00:17:40.938 }' 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:40.938 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:41.196 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:41.196 14:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.196 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:41.196 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:41.196 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:41.196 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:41.453 [2024-07-15 14:10:27.274848] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:41.453 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c0cbeda6-a0df-404d-b482-e7f5e6e096eb 00:17:41.453 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z c0cbeda6-a0df-404d-b482-e7f5e6e096eb ']' 00:17:41.453 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:41.711 [2024-07-15 14:10:27.562654] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:41.711 [2024-07-15 14:10:27.562906] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:41.711 [2024-07-15 14:10:27.563099] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:41.711 [2024-07-15 14:10:27.563197] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:41.711 [2024-07-15 14:10:27.563244] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:41.711 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.711 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:42.022 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:42.022 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:42.022 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:42.022 14:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:42.294 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:42.294 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:42.294 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:42.294 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:42.554 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:42.554 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:42.813 14:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:43.379 [2024-07-15 14:10:29.074992] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:43.379 [2024-07-15 14:10:29.076869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:43.379 [2024-07-15 14:10:29.077064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:43.379 [2024-07-15 14:10:29.077152] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:43.380 [2024-07-15 
14:10:29.077470] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:43.380 [2024-07-15 14:10:29.077625] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:43.380 [2024-07-15 14:10:29.077823] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.380 [2024-07-15 14:10:29.077934] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:43.380 request: 00:17:43.380 { 00:17:43.380 "name": "raid_bdev1", 00:17:43.380 "raid_level": "raid0", 00:17:43.380 "base_bdevs": [ 00:17:43.380 "malloc1", 00:17:43.380 "malloc2", 00:17:43.380 "malloc3" 00:17:43.380 ], 00:17:43.380 "strip_size_kb": 64, 00:17:43.380 "superblock": false, 00:17:43.380 "method": "bdev_raid_create", 00:17:43.380 "req_id": 1 00:17:43.380 } 00:17:43.380 Got JSON-RPC error response 00:17:43.380 response: 00:17:43.380 { 00:17:43.380 "code": -17, 00:17:43.380 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:43.380 } 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:43.380 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:43.639 [2024-07-15 14:10:29.583041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:43.639 [2024-07-15 14:10:29.584457] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:43.639 [2024-07-15 14:10:29.584543] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:43.639 [2024-07-15 14:10:29.584870] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:43.639 [2024-07-15 14:10:29.586694] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:43.639 [2024-07-15 14:10:29.586892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:43.639 [2024-07-15 14:10:29.587131] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:43.639 [2024-07-15 14:10:29.587312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:43.639 pt1 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:43.639 14:10:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.639 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.897 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.897 "name": "raid_bdev1", 00:17:43.897 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:43.897 "strip_size_kb": 64, 00:17:43.897 "state": "configuring", 00:17:43.897 "raid_level": "raid0", 00:17:43.897 "superblock": true, 00:17:43.897 "num_base_bdevs": 3, 00:17:43.897 "num_base_bdevs_discovered": 1, 00:17:43.897 "num_base_bdevs_operational": 3, 00:17:43.897 "base_bdevs_list": [ 00:17:43.897 { 00:17:43.897 "name": "pt1", 00:17:43.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:43.897 "is_configured": true, 00:17:43.897 "data_offset": 2048, 00:17:43.897 "data_size": 63488 00:17:43.897 }, 00:17:43.897 { 00:17:43.897 "name": null, 00:17:43.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.897 "is_configured": false, 00:17:43.897 "data_offset": 2048, 00:17:43.897 "data_size": 63488 00:17:43.897 }, 00:17:43.897 { 00:17:43.897 "name": null, 00:17:43.897 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:43.897 "is_configured": false, 00:17:43.897 "data_offset": 2048, 00:17:43.897 "data_size": 63488 00:17:43.897 } 00:17:43.897 ] 00:17:43.897 }' 00:17:43.897 14:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.897 14:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.830 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:17:44.830 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:44.830 [2024-07-15 14:10:30.755387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:44.830 [2024-07-15 14:10:30.755812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.830 [2024-07-15 14:10:30.755979] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:44.830 [2024-07-15 14:10:30.756112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.830 [2024-07-15 14:10:30.756557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.830 [2024-07-15 14:10:30.756755] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:17:44.830 [2024-07-15 14:10:30.757007] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:44.830 [2024-07-15 14:10:30.757154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:44.830 pt2 00:17:44.830 14:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:45.087 [2024-07-15 14:10:30.995511] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.087 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.345 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.345 "name": "raid_bdev1", 00:17:45.345 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:45.345 "strip_size_kb": 64, 00:17:45.345 "state": "configuring", 00:17:45.345 "raid_level": "raid0", 00:17:45.345 "superblock": true, 00:17:45.345 "num_base_bdevs": 3, 00:17:45.345 "num_base_bdevs_discovered": 1, 00:17:45.345 "num_base_bdevs_operational": 3, 00:17:45.345 "base_bdevs_list": [ 00:17:45.345 { 00:17:45.345 "name": "pt1", 00:17:45.345 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:45.345 "is_configured": true, 00:17:45.345 "data_offset": 2048, 00:17:45.345 "data_size": 63488 00:17:45.345 }, 00:17:45.345 { 00:17:45.345 "name": null, 00:17:45.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.345 "is_configured": false, 00:17:45.345 "data_offset": 2048, 00:17:45.345 "data_size": 63488 00:17:45.345 }, 00:17:45.345 { 00:17:45.345 "name": null, 00:17:45.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:45.345 "is_configured": false, 00:17:45.345 "data_offset": 2048, 00:17:45.345 "data_size": 63488 00:17:45.345 } 00:17:45.345 ] 00:17:45.345 }' 00:17:45.345 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.345 14:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:45.909 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:45.909 14:10:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:45.909 14:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:46.171 [2024-07-15 14:10:32.115684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:46.171 [2024-07-15 14:10:32.116084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.171 [2024-07-15 14:10:32.116269] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:46.171 [2024-07-15 14:10:32.116404] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.171 [2024-07-15 14:10:32.116933] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.171 [2024-07-15 14:10:32.117107] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:46.171 [2024-07-15 14:10:32.117315] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:46.171 [2024-07-15 14:10:32.117450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.171 pt2 00:17:46.171 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:46.171 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:46.171 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:46.441 [2024-07-15 14:10:32.363691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:46.441 [2024-07-15 14:10:32.364019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.441 [2024-07-15 14:10:32.364096] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:46.441 [2024-07-15 14:10:32.364390] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.441 [2024-07-15 14:10:32.364992] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.441 [2024-07-15 14:10:32.365169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:46.441 [2024-07-15 14:10:32.365387] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:46.441 [2024-07-15 14:10:32.365522] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:46.441 [2024-07-15 14:10:32.365742] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:46.441 [2024-07-15 14:10:32.365867] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:46.441 [2024-07-15 14:10:32.365992] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:46.441 [2024-07-15 14:10:32.366243] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:46.441 [2024-07-15 14:10:32.366292] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:46.441 [2024-07-15 14:10:32.366497] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.441 pt3 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.441 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.698 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.698 "name": "raid_bdev1", 00:17:46.698 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:46.699 "strip_size_kb": 64, 00:17:46.699 "state": "online", 00:17:46.699 "raid_level": "raid0", 00:17:46.699 "superblock": true, 00:17:46.699 "num_base_bdevs": 3, 00:17:46.699 "num_base_bdevs_discovered": 3, 00:17:46.699 "num_base_bdevs_operational": 3, 00:17:46.699 "base_bdevs_list": [ 00:17:46.699 { 00:17:46.699 "name": "pt1", 00:17:46.699 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.699 "is_configured": true, 00:17:46.699 "data_offset": 2048, 00:17:46.699 "data_size": 63488 00:17:46.699 }, 00:17:46.699 { 00:17:46.699 "name": "pt2", 00:17:46.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.699 "is_configured": true, 00:17:46.699 "data_offset": 2048, 00:17:46.699 "data_size": 63488 00:17:46.699 }, 00:17:46.699 { 00:17:46.699 "name": "pt3", 00:17:46.699 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:46.699 "is_configured": true, 00:17:46.699 "data_offset": 2048, 00:17:46.699 "data_size": 63488 00:17:46.699 } 00:17:46.699 ] 00:17:46.699 }' 00:17:46.699 14:10:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.699 14:10:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:47.265 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:47.523 [2024-07-15 14:10:33.484069] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.523 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:47.523 "name": "raid_bdev1", 00:17:47.523 "aliases": [ 00:17:47.523 "c0cbeda6-a0df-404d-b482-e7f5e6e096eb" 00:17:47.523 ], 00:17:47.523 "product_name": "Raid Volume", 00:17:47.523 "block_size": 512, 00:17:47.523 "num_blocks": 190464, 00:17:47.523 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:47.523 "assigned_rate_limits": { 00:17:47.523 "rw_ios_per_sec": 0, 00:17:47.523 "rw_mbytes_per_sec": 0, 00:17:47.523 "r_mbytes_per_sec": 0, 00:17:47.523 "w_mbytes_per_sec": 0 00:17:47.523 }, 00:17:47.523 "claimed": false, 00:17:47.523 "zoned": false, 00:17:47.523 "supported_io_types": { 00:17:47.523 "read": true, 00:17:47.523 "write": true, 00:17:47.523 "unmap": true, 00:17:47.523 "flush": true, 00:17:47.523 "reset": true, 00:17:47.523 "nvme_admin": false, 00:17:47.523 "nvme_io": false, 00:17:47.523 "nvme_io_md": false, 00:17:47.523 "write_zeroes": true, 00:17:47.523 "zcopy": false, 00:17:47.523 "get_zone_info": false, 00:17:47.523 "zone_management": false, 00:17:47.523 "zone_append": false, 00:17:47.523 "compare": false, 00:17:47.523 "compare_and_write": false, 00:17:47.523 "abort": false, 00:17:47.523 "seek_hole": false, 00:17:47.523 "seek_data": false, 00:17:47.523 "copy": false, 00:17:47.523 "nvme_iov_md": false 00:17:47.523 }, 00:17:47.523 "memory_domains": [ 00:17:47.523 { 00:17:47.523 "dma_device_id": "system", 00:17:47.523 "dma_device_type": 1 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.523 "dma_device_type": 2 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "dma_device_id": "system", 00:17:47.523 "dma_device_type": 1 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.523 "dma_device_type": 2 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "dma_device_id": "system", 00:17:47.523 "dma_device_type": 1 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.523 "dma_device_type": 2 00:17:47.523 } 00:17:47.523 ], 00:17:47.523 "driver_specific": { 00:17:47.523 "raid": { 00:17:47.523 "uuid": "c0cbeda6-a0df-404d-b482-e7f5e6e096eb", 00:17:47.523 "strip_size_kb": 64, 00:17:47.523 "state": "online", 00:17:47.523 "raid_level": "raid0", 00:17:47.523 "superblock": true, 00:17:47.523 "num_base_bdevs": 3, 00:17:47.523 "num_base_bdevs_discovered": 3, 00:17:47.523 "num_base_bdevs_operational": 3, 00:17:47.523 "base_bdevs_list": [ 00:17:47.523 { 00:17:47.523 "name": "pt1", 00:17:47.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.523 "is_configured": true, 00:17:47.523 "data_offset": 2048, 00:17:47.523 "data_size": 63488 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "name": "pt2", 00:17:47.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.523 "is_configured": true, 00:17:47.523 "data_offset": 2048, 00:17:47.523 "data_size": 63488 00:17:47.523 }, 00:17:47.523 { 00:17:47.523 "name": "pt3", 00:17:47.523 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:47.523 "is_configured": true, 00:17:47.523 "data_offset": 2048, 00:17:47.523 "data_size": 63488 00:17:47.523 } 
00:17:47.523 ] 00:17:47.523 } 00:17:47.523 } 00:17:47.523 }' 00:17:47.523 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.781 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:47.781 pt2 00:17:47.781 pt3' 00:17:47.781 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:47.781 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:47.781 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:47.781 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:47.781 "name": "pt1", 00:17:47.781 "aliases": [ 00:17:47.781 "00000000-0000-0000-0000-000000000001" 00:17:47.781 ], 00:17:47.781 "product_name": "passthru", 00:17:47.781 "block_size": 512, 00:17:47.781 "num_blocks": 65536, 00:17:47.781 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:47.781 "assigned_rate_limits": { 00:17:47.781 "rw_ios_per_sec": 0, 00:17:47.781 "rw_mbytes_per_sec": 0, 00:17:47.781 "r_mbytes_per_sec": 0, 00:17:47.781 "w_mbytes_per_sec": 0 00:17:47.781 }, 00:17:47.781 "claimed": true, 00:17:47.781 "claim_type": "exclusive_write", 00:17:47.781 "zoned": false, 00:17:47.781 "supported_io_types": { 00:17:47.781 "read": true, 00:17:47.781 "write": true, 00:17:47.781 "unmap": true, 00:17:47.781 "flush": true, 00:17:47.781 "reset": true, 00:17:47.781 "nvme_admin": false, 00:17:47.781 "nvme_io": false, 00:17:47.781 "nvme_io_md": false, 00:17:47.781 "write_zeroes": true, 00:17:47.781 "zcopy": true, 00:17:47.781 "get_zone_info": false, 00:17:47.781 "zone_management": false, 00:17:47.781 "zone_append": false, 00:17:47.781 "compare": false, 00:17:47.781 "compare_and_write": false, 00:17:47.781 "abort": true, 00:17:47.781 "seek_hole": false, 00:17:47.781 "seek_data": false, 00:17:47.781 "copy": true, 00:17:47.781 "nvme_iov_md": false 00:17:47.781 }, 00:17:47.781 "memory_domains": [ 00:17:47.781 { 00:17:47.781 "dma_device_id": "system", 00:17:47.781 "dma_device_type": 1 00:17:47.781 }, 00:17:47.781 { 00:17:47.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.781 "dma_device_type": 2 00:17:47.781 } 00:17:47.781 ], 00:17:47.781 "driver_specific": { 00:17:47.781 "passthru": { 00:17:47.781 "name": "pt1", 00:17:47.781 "base_bdev_name": "malloc1" 00:17:47.781 } 00:17:47.781 } 00:17:47.781 }' 00:17:47.781 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.038 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.038 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:48.038 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.038 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.038 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.039 14:10:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.039 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.297 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.297 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:17:48.297 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.297 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.297 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.297 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:48.297 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:48.556 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:48.556 "name": "pt2", 00:17:48.556 "aliases": [ 00:17:48.556 "00000000-0000-0000-0000-000000000002" 00:17:48.556 ], 00:17:48.556 "product_name": "passthru", 00:17:48.556 "block_size": 512, 00:17:48.556 "num_blocks": 65536, 00:17:48.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:48.556 "assigned_rate_limits": { 00:17:48.556 "rw_ios_per_sec": 0, 00:17:48.556 "rw_mbytes_per_sec": 0, 00:17:48.556 "r_mbytes_per_sec": 0, 00:17:48.556 "w_mbytes_per_sec": 0 00:17:48.556 }, 00:17:48.556 "claimed": true, 00:17:48.556 "claim_type": "exclusive_write", 00:17:48.556 "zoned": false, 00:17:48.556 "supported_io_types": { 00:17:48.556 "read": true, 00:17:48.556 "write": true, 00:17:48.556 "unmap": true, 00:17:48.556 "flush": true, 00:17:48.556 "reset": true, 00:17:48.556 "nvme_admin": false, 00:17:48.556 "nvme_io": false, 00:17:48.556 "nvme_io_md": false, 00:17:48.556 "write_zeroes": true, 00:17:48.556 "zcopy": true, 00:17:48.556 "get_zone_info": false, 00:17:48.556 "zone_management": false, 00:17:48.556 "zone_append": false, 00:17:48.556 "compare": false, 00:17:48.556 "compare_and_write": false, 00:17:48.556 "abort": true, 00:17:48.556 "seek_hole": false, 00:17:48.556 "seek_data": false, 00:17:48.556 "copy": true, 00:17:48.556 "nvme_iov_md": false 00:17:48.556 }, 00:17:48.556 "memory_domains": [ 00:17:48.556 { 00:17:48.556 "dma_device_id": "system", 00:17:48.556 "dma_device_type": 1 00:17:48.556 }, 00:17:48.556 { 00:17:48.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:48.556 "dma_device_type": 2 00:17:48.556 } 00:17:48.556 ], 00:17:48.556 "driver_specific": { 00:17:48.556 "passthru": { 00:17:48.556 "name": "pt2", 00:17:48.556 "base_bdev_name": "malloc2" 00:17:48.556 } 00:17:48.556 } 00:17:48.556 }' 00:17:48.556 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.556 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:48.556 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:48.556 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:48.815 14:10:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:48.815 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:49.073 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:49.073 "name": "pt3", 00:17:49.073 "aliases": [ 00:17:49.073 "00000000-0000-0000-0000-000000000003" 00:17:49.073 ], 00:17:49.073 "product_name": "passthru", 00:17:49.073 "block_size": 512, 00:17:49.073 "num_blocks": 65536, 00:17:49.073 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:49.073 "assigned_rate_limits": { 00:17:49.073 "rw_ios_per_sec": 0, 00:17:49.073 "rw_mbytes_per_sec": 0, 00:17:49.073 "r_mbytes_per_sec": 0, 00:17:49.073 "w_mbytes_per_sec": 0 00:17:49.073 }, 00:17:49.073 "claimed": true, 00:17:49.073 "claim_type": "exclusive_write", 00:17:49.073 "zoned": false, 00:17:49.073 "supported_io_types": { 00:17:49.073 "read": true, 00:17:49.073 "write": true, 00:17:49.073 "unmap": true, 00:17:49.073 "flush": true, 00:17:49.073 "reset": true, 00:17:49.073 "nvme_admin": false, 00:17:49.073 "nvme_io": false, 00:17:49.073 "nvme_io_md": false, 00:17:49.073 "write_zeroes": true, 00:17:49.073 "zcopy": true, 00:17:49.073 "get_zone_info": false, 00:17:49.073 "zone_management": false, 00:17:49.073 "zone_append": false, 00:17:49.073 "compare": false, 00:17:49.073 "compare_and_write": false, 00:17:49.073 "abort": true, 00:17:49.073 "seek_hole": false, 00:17:49.073 "seek_data": false, 00:17:49.073 "copy": true, 00:17:49.073 "nvme_iov_md": false 00:17:49.073 }, 00:17:49.073 "memory_domains": [ 00:17:49.073 { 00:17:49.073 "dma_device_id": "system", 00:17:49.073 "dma_device_type": 1 00:17:49.073 }, 00:17:49.073 { 00:17:49.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:49.073 "dma_device_type": 2 00:17:49.073 } 00:17:49.073 ], 00:17:49.073 "driver_specific": { 00:17:49.073 "passthru": { 00:17:49.073 "name": "pt3", 00:17:49.073 "base_bdev_name": "malloc3" 00:17:49.073 } 00:17:49.073 } 00:17:49.073 }' 00:17:49.073 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:49.332 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.590 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:49.590 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:49.590 14:10:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:49.590 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:49.850 [2024-07-15 14:10:35.704541] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' c0cbeda6-a0df-404d-b482-e7f5e6e096eb '!=' c0cbeda6-a0df-404d-b482-e7f5e6e096eb ']' 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 193238 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 193238 ']' 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 193238 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 193238 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 193238' 00:17:49.850 killing process with pid 193238 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 193238 00:17:49.850 [2024-07-15 14:10:35.758119] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.850 [2024-07-15 14:10:35.758351] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.850 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 193238 00:17:49.850 [2024-07-15 14:10:35.758519] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.850 [2024-07-15 14:10:35.758694] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:50.109 [2024-07-15 14:10:36.017606] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.493 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:51.493 00:17:51.493 real 0m16.123s 00:17:51.493 user 0m28.715s 00:17:51.493 sys 0m1.919s 00:17:51.493 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.493 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.493 ************************************ 00:17:51.493 END TEST raid_superblock_test 00:17:51.493 ************************************ 00:17:51.493 14:10:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:51.493 14:10:37 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:17:51.493 14:10:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:51.493 14:10:37 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.493 14:10:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.493 ************************************ 00:17:51.493 START TEST raid_read_error_test 00:17:51.493 ************************************ 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.IFksnLPzzu 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=193726 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 193726 /var/tmp/spdk-raid.sock 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 
-z -f -L bdev_raid 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 193726 ']' 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.493 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.494 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.494 14:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.494 [2024-07-15 14:10:37.259434] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:17:51.494 [2024-07-15 14:10:37.259817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193726 ] 00:17:51.494 [2024-07-15 14:10:37.423861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.752 [2024-07-15 14:10:37.677603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.011 [2024-07-15 14:10:37.876020] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.579 14:10:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.579 14:10:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:17:52.579 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:52.579 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:52.838 BaseBdev1_malloc 00:17:52.838 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:53.096 true 00:17:53.096 14:10:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:53.355 [2024-07-15 14:10:39.124690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:53.355 [2024-07-15 14:10:39.125421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.355 [2024-07-15 14:10:39.125747] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:17:53.355 [2024-07-15 14:10:39.125980] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.355 [2024-07-15 14:10:39.127998] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.355 [2024-07-15 14:10:39.128277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.355 BaseBdev1 00:17:53.355 14:10:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:53.355 14:10:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:53.615 BaseBdev2_malloc 00:17:53.615 14:10:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:53.873 true 00:17:53.873 14:10:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:54.132 [2024-07-15 14:10:40.090943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:54.132 [2024-07-15 14:10:40.091519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.132 [2024-07-15 14:10:40.091803] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:54.132 [2024-07-15 14:10:40.092029] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.132 [2024-07-15 14:10:40.094275] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.132 [2024-07-15 14:10:40.094529] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:54.132 BaseBdev2 00:17:54.132 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:54.132 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:54.390 BaseBdev3_malloc 00:17:54.647 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:54.904 true 00:17:54.904 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:54.904 [2024-07-15 14:10:40.894146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:54.904 [2024-07-15 14:10:40.894746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.904 [2024-07-15 14:10:40.895031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:17:54.904 [2024-07-15 14:10:40.895255] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.904 [2024-07-15 14:10:40.897438] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.904 [2024-07-15 14:10:40.897694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:54.904 BaseBdev3 00:17:55.160 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:55.160 [2024-07-15 14:10:41.134307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.160 [2024-07-15 14:10:41.135953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:55.160 [2024-07-15 14:10:41.136136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:55.160 [2024-07-15 14:10:41.136428] bdev_raid.c:1694:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x616000009380 00:17:55.160 [2024-07-15 14:10:41.136565] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:55.160 [2024-07-15 14:10:41.136755] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:55.160 [2024-07-15 14:10:41.137073] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:17:55.160 [2024-07-15 14:10:41.137199] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:17:55.160 [2024-07-15 14:10:41.137434] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.160 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.418 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.418 "name": "raid_bdev1", 00:17:55.418 "uuid": "9df7aa3f-a0ef-48b2-ad04-32b6334c6b90", 00:17:55.418 "strip_size_kb": 64, 00:17:55.418 "state": "online", 00:17:55.418 "raid_level": "raid0", 00:17:55.418 "superblock": true, 00:17:55.418 "num_base_bdevs": 3, 00:17:55.418 "num_base_bdevs_discovered": 3, 00:17:55.418 "num_base_bdevs_operational": 3, 00:17:55.418 "base_bdevs_list": [ 00:17:55.418 { 00:17:55.418 "name": "BaseBdev1", 00:17:55.418 "uuid": "9e2f382a-79cb-5151-b7ce-7ebbd86301d9", 00:17:55.418 "is_configured": true, 00:17:55.418 "data_offset": 2048, 00:17:55.418 "data_size": 63488 00:17:55.418 }, 00:17:55.418 { 00:17:55.418 "name": "BaseBdev2", 00:17:55.418 "uuid": "e0147a90-5f4d-5af5-acf2-7c565ecba2e9", 00:17:55.418 "is_configured": true, 00:17:55.418 "data_offset": 2048, 00:17:55.418 "data_size": 63488 00:17:55.418 }, 00:17:55.418 { 00:17:55.418 "name": "BaseBdev3", 00:17:55.418 "uuid": "e85d5462-51a6-57bf-a3e5-a0dcbed464e1", 00:17:55.418 "is_configured": true, 00:17:55.418 "data_offset": 2048, 00:17:55.418 "data_size": 63488 00:17:55.418 } 00:17:55.418 ] 00:17:55.418 }' 00:17:55.418 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.418 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:56.357 14:10:42 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:56.357 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:56.358 [2024-07-15 14:10:42.196539] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:57.292 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:57.550 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:57.551 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:57.551 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.551 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.821 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:57.821 "name": "raid_bdev1", 00:17:57.821 "uuid": "9df7aa3f-a0ef-48b2-ad04-32b6334c6b90", 00:17:57.822 "strip_size_kb": 64, 00:17:57.822 "state": "online", 00:17:57.822 "raid_level": "raid0", 00:17:57.822 "superblock": true, 00:17:57.822 "num_base_bdevs": 3, 00:17:57.822 "num_base_bdevs_discovered": 3, 00:17:57.822 "num_base_bdevs_operational": 3, 00:17:57.822 "base_bdevs_list": [ 00:17:57.822 { 00:17:57.822 "name": "BaseBdev1", 00:17:57.822 "uuid": "9e2f382a-79cb-5151-b7ce-7ebbd86301d9", 00:17:57.822 "is_configured": true, 00:17:57.822 "data_offset": 2048, 00:17:57.822 "data_size": 63488 00:17:57.822 }, 00:17:57.822 { 00:17:57.822 "name": "BaseBdev2", 00:17:57.822 "uuid": "e0147a90-5f4d-5af5-acf2-7c565ecba2e9", 00:17:57.822 "is_configured": true, 00:17:57.822 "data_offset": 2048, 00:17:57.822 "data_size": 63488 00:17:57.822 }, 00:17:57.822 { 00:17:57.822 "name": "BaseBdev3", 00:17:57.822 "uuid": "e85d5462-51a6-57bf-a3e5-a0dcbed464e1", 00:17:57.822 "is_configured": true, 00:17:57.822 "data_offset": 2048, 00:17:57.822 "data_size": 63488 00:17:57.822 } 00:17:57.822 ] 00:17:57.822 }' 00:17:57.822 14:10:43 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:57.822 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.389 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:58.648 [2024-07-15 14:10:44.593947] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.648 [2024-07-15 14:10:44.593999] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.648 [2024-07-15 14:10:44.595483] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.648 [2024-07-15 14:10:44.595546] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.648 [2024-07-15 14:10:44.595605] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.648 [2024-07-15 14:10:44.595615] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:17:58.648 0 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 193726 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 193726 ']' 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 193726 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 193726 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 193726' 00:17:58.648 killing process with pid 193726 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 193726 00:17:58.648 [2024-07-15 14:10:44.639387] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.648 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 193726 00:17:58.906 [2024-07-15 14:10:44.828909] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.IFksnLPzzu 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:18:00.281 00:18:00.281 real 0m8.841s 00:18:00.281 user 0m13.655s 00:18:00.281 sys 0m0.978s 00:18:00.281 
14:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:00.281 14:10:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.281 ************************************ 00:18:00.281 END TEST raid_read_error_test 00:18:00.281 ************************************ 00:18:00.281 14:10:46 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:00.281 14:10:46 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:18:00.281 14:10:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:00.281 14:10:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.281 14:10:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.281 ************************************ 00:18:00.281 START TEST raid_write_error_test 00:18:00.281 ************************************ 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # 
create_arg+=' -z 64' 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.IkKP24gtWi 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=193937 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 193937 /var/tmp/spdk-raid.sock 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 193937 ']' 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:00.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:00.282 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.282 [2024-07-15 14:10:46.148436] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:18:00.282 [2024-07-15 14:10:46.148606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193937 ] 00:18:00.539 [2024-07-15 14:10:46.321541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.797 [2024-07-15 14:10:46.569299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.797 [2024-07-15 14:10:46.772515] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.365 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:01.365 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:18:01.365 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:01.365 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:01.635 BaseBdev1_malloc 00:18:01.635 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:18:01.894 true 00:18:01.894 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:18:02.151 [2024-07-15 14:10:47.950761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:18:02.151 [2024-07-15 14:10:47.950927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.151 
[2024-07-15 14:10:47.950975] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:18:02.151 [2024-07-15 14:10:47.951000] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.151 [2024-07-15 14:10:47.952793] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.151 [2024-07-15 14:10:47.952875] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.151 BaseBdev1 00:18:02.151 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:02.151 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:02.410 BaseBdev2_malloc 00:18:02.410 14:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:18:02.667 true 00:18:02.667 14:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:18:02.980 [2024-07-15 14:10:48.704099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:18:02.980 [2024-07-15 14:10:48.704256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.980 [2024-07-15 14:10:48.704317] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:02.980 [2024-07-15 14:10:48.704343] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.980 [2024-07-15 14:10:48.706067] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.980 [2024-07-15 14:10:48.706150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:02.980 BaseBdev2 00:18:02.980 14:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:18:02.980 14:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:03.248 BaseBdev3_malloc 00:18:03.248 14:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:18:03.542 true 00:18:03.542 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:18:03.799 [2024-07-15 14:10:49.547380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:18:03.799 [2024-07-15 14:10:49.547516] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.799 [2024-07-15 14:10:49.547556] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:18:03.799 [2024-07-15 14:10:49.547587] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.799 [2024-07-15 14:10:49.549442] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.799 [2024-07-15 14:10:49.549501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 
00:18:03.799 BaseBdev3 00:18:03.799 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:18:04.058 [2024-07-15 14:10:49.819466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.058 [2024-07-15 14:10:49.821006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:04.058 [2024-07-15 14:10:49.821077] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:04.058 [2024-07-15 14:10:49.821263] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:18:04.058 [2024-07-15 14:10:49.821287] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:04.058 [2024-07-15 14:10:49.821412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:04.058 [2024-07-15 14:10:49.821677] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:18:04.058 [2024-07-15 14:10:49.821704] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:18:04.058 [2024-07-15 14:10:49.821857] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.058 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.318 14:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.318 "name": "raid_bdev1", 00:18:04.318 "uuid": "48492550-7b58-4d29-a0cf-1716dab08d21", 00:18:04.318 "strip_size_kb": 64, 00:18:04.318 "state": "online", 00:18:04.318 "raid_level": "raid0", 00:18:04.318 "superblock": true, 00:18:04.318 "num_base_bdevs": 3, 00:18:04.318 "num_base_bdevs_discovered": 3, 00:18:04.318 "num_base_bdevs_operational": 3, 00:18:04.318 "base_bdevs_list": [ 00:18:04.318 { 00:18:04.318 "name": "BaseBdev1", 00:18:04.318 "uuid": "ec19a894-b333-5b0e-b9ef-629f4f94af5b", 00:18:04.318 "is_configured": true, 00:18:04.318 "data_offset": 2048, 00:18:04.318 "data_size": 63488 00:18:04.318 }, 
00:18:04.318 { 00:18:04.318 "name": "BaseBdev2", 00:18:04.318 "uuid": "298c2232-0aad-5580-a2f0-8899cd93bea8", 00:18:04.318 "is_configured": true, 00:18:04.318 "data_offset": 2048, 00:18:04.318 "data_size": 63488 00:18:04.318 }, 00:18:04.318 { 00:18:04.318 "name": "BaseBdev3", 00:18:04.318 "uuid": "7f056d4f-3994-5d17-8300-02dcae5dfbf2", 00:18:04.318 "is_configured": true, 00:18:04.318 "data_offset": 2048, 00:18:04.318 "data_size": 63488 00:18:04.318 } 00:18:04.318 ] 00:18:04.318 }' 00:18:04.318 14:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.318 14:10:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.882 14:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:18:04.882 14:10:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:18:05.139 [2024-07-15 14:10:50.920768] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:06.077 14:10:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.077 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.643 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.643 "name": "raid_bdev1", 00:18:06.643 "uuid": "48492550-7b58-4d29-a0cf-1716dab08d21", 00:18:06.643 "strip_size_kb": 64, 00:18:06.643 "state": "online", 00:18:06.643 "raid_level": "raid0", 00:18:06.643 "superblock": true, 00:18:06.643 "num_base_bdevs": 3, 00:18:06.643 "num_base_bdevs_discovered": 3, 00:18:06.643 "num_base_bdevs_operational": 3, 00:18:06.643 "base_bdevs_list": [ 
00:18:06.643 { 00:18:06.643 "name": "BaseBdev1", 00:18:06.643 "uuid": "ec19a894-b333-5b0e-b9ef-629f4f94af5b", 00:18:06.643 "is_configured": true, 00:18:06.643 "data_offset": 2048, 00:18:06.643 "data_size": 63488 00:18:06.643 }, 00:18:06.643 { 00:18:06.643 "name": "BaseBdev2", 00:18:06.643 "uuid": "298c2232-0aad-5580-a2f0-8899cd93bea8", 00:18:06.643 "is_configured": true, 00:18:06.643 "data_offset": 2048, 00:18:06.643 "data_size": 63488 00:18:06.643 }, 00:18:06.643 { 00:18:06.643 "name": "BaseBdev3", 00:18:06.643 "uuid": "7f056d4f-3994-5d17-8300-02dcae5dfbf2", 00:18:06.643 "is_configured": true, 00:18:06.643 "data_offset": 2048, 00:18:06.643 "data_size": 63488 00:18:06.643 } 00:18:06.643 ] 00:18:06.643 }' 00:18:06.643 14:10:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.643 14:10:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:07.210 14:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:07.469 [2024-07-15 14:10:53.350470] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.469 [2024-07-15 14:10:53.350758] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.469 [2024-07-15 14:10:53.352210] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.469 [2024-07-15 14:10:53.352393] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.469 [2024-07-15 14:10:53.352465] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.469 [2024-07-15 14:10:53.352672] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:18:07.469 0 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 193937 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 193937 ']' 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 193937 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 193937 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 193937' 00:18:07.469 killing process with pid 193937 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 193937 00:18:07.469 [2024-07-15 14:10:53.405478] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.469 14:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 193937 00:18:07.728 [2024-07-15 14:10:53.600866] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.IkKP24gtWi 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:18:09.104 00:18:09.104 real 0m8.715s 00:18:09.104 user 0m13.451s 00:18:09.104 sys 0m0.969s 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.104 14:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.104 ************************************ 00:18:09.104 END TEST raid_write_error_test 00:18:09.104 ************************************ 00:18:09.104 14:10:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:09.104 14:10:54 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:18:09.104 14:10:54 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:18:09.104 14:10:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:09.104 14:10:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.104 14:10:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:09.104 ************************************ 00:18:09.104 START TEST raid_state_function_test 00:18:09.104 ************************************ 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 false 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:09.104 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:09.105 14:10:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=194147 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 194147' 00:18:09.105 Process raid pid: 194147 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 194147 /var/tmp/spdk-raid.sock 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 194147 ']' 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:09.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.105 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.105 [2024-07-15 14:10:54.934758] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:18:09.105 [2024-07-15 14:10:54.935188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.105 [2024-07-15 14:10:55.090591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.364 [2024-07-15 14:10:55.311670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.623 [2024-07-15 14:10:55.516974] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:10.190 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.190 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:18:10.190 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:10.449 [2024-07-15 14:10:56.241633] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:10.449 [2024-07-15 14:10:56.242003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:10.449 [2024-07-15 14:10:56.242163] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:10.449 [2024-07-15 14:10:56.242317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:10.449 [2024-07-15 14:10:56.242427] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:10.449 [2024-07-15 14:10:56.242553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.449 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.707 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.707 "name": "Existed_Raid", 00:18:10.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.708 
"strip_size_kb": 64, 00:18:10.708 "state": "configuring", 00:18:10.708 "raid_level": "concat", 00:18:10.708 "superblock": false, 00:18:10.708 "num_base_bdevs": 3, 00:18:10.708 "num_base_bdevs_discovered": 0, 00:18:10.708 "num_base_bdevs_operational": 3, 00:18:10.708 "base_bdevs_list": [ 00:18:10.708 { 00:18:10.708 "name": "BaseBdev1", 00:18:10.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.708 "is_configured": false, 00:18:10.708 "data_offset": 0, 00:18:10.708 "data_size": 0 00:18:10.708 }, 00:18:10.708 { 00:18:10.708 "name": "BaseBdev2", 00:18:10.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.708 "is_configured": false, 00:18:10.708 "data_offset": 0, 00:18:10.708 "data_size": 0 00:18:10.708 }, 00:18:10.708 { 00:18:10.708 "name": "BaseBdev3", 00:18:10.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.708 "is_configured": false, 00:18:10.708 "data_offset": 0, 00:18:10.708 "data_size": 0 00:18:10.708 } 00:18:10.708 ] 00:18:10.708 }' 00:18:10.708 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.708 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.276 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:11.534 [2024-07-15 14:10:57.393706] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:11.534 [2024-07-15 14:10:57.393979] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:11.534 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:11.793 [2024-07-15 14:10:57.641778] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:11.793 [2024-07-15 14:10:57.642106] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:11.793 [2024-07-15 14:10:57.642227] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:11.793 [2024-07-15 14:10:57.642292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:11.793 [2024-07-15 14:10:57.642397] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:11.793 [2024-07-15 14:10:57.642488] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:11.793 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:12.053 [2024-07-15 14:10:57.933806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:12.053 BaseBdev1 00:18:12.053 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:12.053 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:12.053 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:12.053 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:12.053 14:10:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:12.053 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:12.053 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.312 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:12.570 [ 00:18:12.570 { 00:18:12.570 "name": "BaseBdev1", 00:18:12.570 "aliases": [ 00:18:12.570 "baac28c2-bd00-4151-9929-6cd8ddfb3211" 00:18:12.570 ], 00:18:12.571 "product_name": "Malloc disk", 00:18:12.571 "block_size": 512, 00:18:12.571 "num_blocks": 65536, 00:18:12.571 "uuid": "baac28c2-bd00-4151-9929-6cd8ddfb3211", 00:18:12.571 "assigned_rate_limits": { 00:18:12.571 "rw_ios_per_sec": 0, 00:18:12.571 "rw_mbytes_per_sec": 0, 00:18:12.571 "r_mbytes_per_sec": 0, 00:18:12.571 "w_mbytes_per_sec": 0 00:18:12.571 }, 00:18:12.571 "claimed": true, 00:18:12.571 "claim_type": "exclusive_write", 00:18:12.571 "zoned": false, 00:18:12.571 "supported_io_types": { 00:18:12.571 "read": true, 00:18:12.571 "write": true, 00:18:12.571 "unmap": true, 00:18:12.571 "flush": true, 00:18:12.571 "reset": true, 00:18:12.571 "nvme_admin": false, 00:18:12.571 "nvme_io": false, 00:18:12.571 "nvme_io_md": false, 00:18:12.571 "write_zeroes": true, 00:18:12.571 "zcopy": true, 00:18:12.571 "get_zone_info": false, 00:18:12.571 "zone_management": false, 00:18:12.571 "zone_append": false, 00:18:12.571 "compare": false, 00:18:12.571 "compare_and_write": false, 00:18:12.571 "abort": true, 00:18:12.571 "seek_hole": false, 00:18:12.571 "seek_data": false, 00:18:12.571 "copy": true, 00:18:12.571 "nvme_iov_md": false 00:18:12.571 }, 00:18:12.571 "memory_domains": [ 00:18:12.571 { 00:18:12.571 "dma_device_id": "system", 00:18:12.571 "dma_device_type": 1 00:18:12.571 }, 00:18:12.571 { 00:18:12.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.571 "dma_device_type": 2 00:18:12.571 } 00:18:12.571 ], 00:18:12.571 "driver_specific": {} 00:18:12.571 } 00:18:12.571 ] 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.571 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.829 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.829 "name": "Existed_Raid", 00:18:12.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.829 "strip_size_kb": 64, 00:18:12.829 "state": "configuring", 00:18:12.829 "raid_level": "concat", 00:18:12.829 "superblock": false, 00:18:12.829 "num_base_bdevs": 3, 00:18:12.829 "num_base_bdevs_discovered": 1, 00:18:12.829 "num_base_bdevs_operational": 3, 00:18:12.829 "base_bdevs_list": [ 00:18:12.829 { 00:18:12.829 "name": "BaseBdev1", 00:18:12.829 "uuid": "baac28c2-bd00-4151-9929-6cd8ddfb3211", 00:18:12.829 "is_configured": true, 00:18:12.829 "data_offset": 0, 00:18:12.829 "data_size": 65536 00:18:12.829 }, 00:18:12.829 { 00:18:12.829 "name": "BaseBdev2", 00:18:12.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.829 "is_configured": false, 00:18:12.829 "data_offset": 0, 00:18:12.829 "data_size": 0 00:18:12.829 }, 00:18:12.829 { 00:18:12.829 "name": "BaseBdev3", 00:18:12.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.829 "is_configured": false, 00:18:12.829 "data_offset": 0, 00:18:12.829 "data_size": 0 00:18:12.829 } 00:18:12.829 ] 00:18:12.829 }' 00:18:12.829 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.829 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.764 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:13.764 [2024-07-15 14:10:59.630143] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:13.764 [2024-07-15 14:10:59.630481] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:13.764 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:14.022 [2024-07-15 14:10:59.910217] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:14.022 [2024-07-15 14:10:59.911917] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:14.022 [2024-07-15 14:10:59.912125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:14.022 [2024-07-15 14:10:59.912292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:14.022 [2024-07-15 14:10:59.912452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.022 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.281 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.281 "name": "Existed_Raid", 00:18:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.281 "strip_size_kb": 64, 00:18:14.281 "state": "configuring", 00:18:14.281 "raid_level": "concat", 00:18:14.281 "superblock": false, 00:18:14.281 "num_base_bdevs": 3, 00:18:14.281 "num_base_bdevs_discovered": 1, 00:18:14.281 "num_base_bdevs_operational": 3, 00:18:14.281 "base_bdevs_list": [ 00:18:14.281 { 00:18:14.281 "name": "BaseBdev1", 00:18:14.281 "uuid": "baac28c2-bd00-4151-9929-6cd8ddfb3211", 00:18:14.281 "is_configured": true, 00:18:14.281 "data_offset": 0, 00:18:14.281 "data_size": 65536 00:18:14.281 }, 00:18:14.281 { 00:18:14.281 "name": "BaseBdev2", 00:18:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.281 "is_configured": false, 00:18:14.281 "data_offset": 0, 00:18:14.281 "data_size": 0 00:18:14.281 }, 00:18:14.281 { 00:18:14.281 "name": "BaseBdev3", 00:18:14.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.281 "is_configured": false, 00:18:14.281 "data_offset": 0, 00:18:14.281 "data_size": 0 00:18:14.281 } 00:18:14.281 ] 00:18:14.281 }' 00:18:14.281 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.281 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.847 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:15.106 [2024-07-15 14:11:01.036606] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:15.106 BaseBdev2 00:18:15.106 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:15.106 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:15.106 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:15.106 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:15.106 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:15.106 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:18:15.106 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:15.365 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:15.623 [ 00:18:15.623 { 00:18:15.623 "name": "BaseBdev2", 00:18:15.623 "aliases": [ 00:18:15.623 "91974975-66ad-4a99-bd21-81791ebb4ef4" 00:18:15.623 ], 00:18:15.623 "product_name": "Malloc disk", 00:18:15.623 "block_size": 512, 00:18:15.623 "num_blocks": 65536, 00:18:15.623 "uuid": "91974975-66ad-4a99-bd21-81791ebb4ef4", 00:18:15.623 "assigned_rate_limits": { 00:18:15.623 "rw_ios_per_sec": 0, 00:18:15.623 "rw_mbytes_per_sec": 0, 00:18:15.623 "r_mbytes_per_sec": 0, 00:18:15.623 "w_mbytes_per_sec": 0 00:18:15.623 }, 00:18:15.623 "claimed": true, 00:18:15.623 "claim_type": "exclusive_write", 00:18:15.623 "zoned": false, 00:18:15.623 "supported_io_types": { 00:18:15.623 "read": true, 00:18:15.623 "write": true, 00:18:15.623 "unmap": true, 00:18:15.623 "flush": true, 00:18:15.623 "reset": true, 00:18:15.623 "nvme_admin": false, 00:18:15.623 "nvme_io": false, 00:18:15.623 "nvme_io_md": false, 00:18:15.623 "write_zeroes": true, 00:18:15.623 "zcopy": true, 00:18:15.623 "get_zone_info": false, 00:18:15.623 "zone_management": false, 00:18:15.623 "zone_append": false, 00:18:15.623 "compare": false, 00:18:15.623 "compare_and_write": false, 00:18:15.623 "abort": true, 00:18:15.623 "seek_hole": false, 00:18:15.623 "seek_data": false, 00:18:15.623 "copy": true, 00:18:15.623 "nvme_iov_md": false 00:18:15.623 }, 00:18:15.623 "memory_domains": [ 00:18:15.623 { 00:18:15.623 "dma_device_id": "system", 00:18:15.623 "dma_device_type": 1 00:18:15.623 }, 00:18:15.623 { 00:18:15.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.623 "dma_device_type": 2 00:18:15.623 } 00:18:15.623 ], 00:18:15.623 "driver_specific": {} 00:18:15.623 } 00:18:15.623 ] 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.623 
14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.623 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.882 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.882 "name": "Existed_Raid", 00:18:15.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.882 "strip_size_kb": 64, 00:18:15.882 "state": "configuring", 00:18:15.882 "raid_level": "concat", 00:18:15.882 "superblock": false, 00:18:15.882 "num_base_bdevs": 3, 00:18:15.882 "num_base_bdevs_discovered": 2, 00:18:15.882 "num_base_bdevs_operational": 3, 00:18:15.882 "base_bdevs_list": [ 00:18:15.882 { 00:18:15.882 "name": "BaseBdev1", 00:18:15.882 "uuid": "baac28c2-bd00-4151-9929-6cd8ddfb3211", 00:18:15.882 "is_configured": true, 00:18:15.882 "data_offset": 0, 00:18:15.882 "data_size": 65536 00:18:15.882 }, 00:18:15.882 { 00:18:15.882 "name": "BaseBdev2", 00:18:15.882 "uuid": "91974975-66ad-4a99-bd21-81791ebb4ef4", 00:18:15.882 "is_configured": true, 00:18:15.882 "data_offset": 0, 00:18:15.882 "data_size": 65536 00:18:15.882 }, 00:18:15.882 { 00:18:15.882 "name": "BaseBdev3", 00:18:15.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.882 "is_configured": false, 00:18:15.882 "data_offset": 0, 00:18:15.882 "data_size": 0 00:18:15.882 } 00:18:15.882 ] 00:18:15.882 }' 00:18:15.882 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.882 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.449 14:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:17.014 [2024-07-15 14:11:02.766473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.014 [2024-07-15 14:11:02.766831] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:17.014 [2024-07-15 14:11:02.766889] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:17.014 [2024-07-15 14:11:02.767158] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:17.014 [2024-07-15 14:11:02.767564] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:17.014 [2024-07-15 14:11:02.767689] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:17.014 [2024-07-15 14:11:02.768023] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.014 BaseBdev3 00:18:17.014 14:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:17.014 14:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:17.014 14:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:17.014 14:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:17.014 14:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:17.014 14:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:17.014 14:11:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.273 14:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:17.274 [ 00:18:17.274 { 00:18:17.274 "name": "BaseBdev3", 00:18:17.274 "aliases": [ 00:18:17.274 "b5be4fbe-dd7a-4a0b-8630-871bd148f4e6" 00:18:17.274 ], 00:18:17.274 "product_name": "Malloc disk", 00:18:17.274 "block_size": 512, 00:18:17.274 "num_blocks": 65536, 00:18:17.274 "uuid": "b5be4fbe-dd7a-4a0b-8630-871bd148f4e6", 00:18:17.274 "assigned_rate_limits": { 00:18:17.274 "rw_ios_per_sec": 0, 00:18:17.274 "rw_mbytes_per_sec": 0, 00:18:17.274 "r_mbytes_per_sec": 0, 00:18:17.274 "w_mbytes_per_sec": 0 00:18:17.274 }, 00:18:17.274 "claimed": true, 00:18:17.274 "claim_type": "exclusive_write", 00:18:17.274 "zoned": false, 00:18:17.274 "supported_io_types": { 00:18:17.274 "read": true, 00:18:17.274 "write": true, 00:18:17.274 "unmap": true, 00:18:17.274 "flush": true, 00:18:17.274 "reset": true, 00:18:17.274 "nvme_admin": false, 00:18:17.274 "nvme_io": false, 00:18:17.274 "nvme_io_md": false, 00:18:17.274 "write_zeroes": true, 00:18:17.274 "zcopy": true, 00:18:17.274 "get_zone_info": false, 00:18:17.274 "zone_management": false, 00:18:17.274 "zone_append": false, 00:18:17.274 "compare": false, 00:18:17.274 "compare_and_write": false, 00:18:17.274 "abort": true, 00:18:17.274 "seek_hole": false, 00:18:17.274 "seek_data": false, 00:18:17.274 "copy": true, 00:18:17.274 "nvme_iov_md": false 00:18:17.274 }, 00:18:17.274 "memory_domains": [ 00:18:17.274 { 00:18:17.274 "dma_device_id": "system", 00:18:17.274 "dma_device_type": 1 00:18:17.274 }, 00:18:17.274 { 00:18:17.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.274 "dma_device_type": 2 00:18:17.274 } 00:18:17.274 ], 00:18:17.274 "driver_specific": {} 00:18:17.274 } 00:18:17.274 ] 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.533 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.790 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:17.790 "name": "Existed_Raid", 00:18:17.790 "uuid": "54375f53-dd85-4725-8a06-4a4d4efc2d29", 00:18:17.791 "strip_size_kb": 64, 00:18:17.791 "state": "online", 00:18:17.791 "raid_level": "concat", 00:18:17.791 "superblock": false, 00:18:17.791 "num_base_bdevs": 3, 00:18:17.791 "num_base_bdevs_discovered": 3, 00:18:17.791 "num_base_bdevs_operational": 3, 00:18:17.791 "base_bdevs_list": [ 00:18:17.791 { 00:18:17.791 "name": "BaseBdev1", 00:18:17.791 "uuid": "baac28c2-bd00-4151-9929-6cd8ddfb3211", 00:18:17.791 "is_configured": true, 00:18:17.791 "data_offset": 0, 00:18:17.791 "data_size": 65536 00:18:17.791 }, 00:18:17.791 { 00:18:17.791 "name": "BaseBdev2", 00:18:17.791 "uuid": "91974975-66ad-4a99-bd21-81791ebb4ef4", 00:18:17.791 "is_configured": true, 00:18:17.791 "data_offset": 0, 00:18:17.791 "data_size": 65536 00:18:17.791 }, 00:18:17.791 { 00:18:17.791 "name": "BaseBdev3", 00:18:17.791 "uuid": "b5be4fbe-dd7a-4a0b-8630-871bd148f4e6", 00:18:17.791 "is_configured": true, 00:18:17.791 "data_offset": 0, 00:18:17.791 "data_size": 65536 00:18:17.791 } 00:18:17.791 ] 00:18:17.791 }' 00:18:17.791 14:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:17.791 14:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.356 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:18.356 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:18.357 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:18.357 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:18.357 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:18.357 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:18.357 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:18.357 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:18.615 [2024-07-15 14:11:04.447171] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:18.615 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:18.615 "name": "Existed_Raid", 00:18:18.615 "aliases": [ 00:18:18.615 "54375f53-dd85-4725-8a06-4a4d4efc2d29" 00:18:18.615 ], 00:18:18.615 "product_name": "Raid Volume", 00:18:18.615 "block_size": 512, 00:18:18.615 "num_blocks": 196608, 00:18:18.615 "uuid": "54375f53-dd85-4725-8a06-4a4d4efc2d29", 00:18:18.615 "assigned_rate_limits": { 00:18:18.615 "rw_ios_per_sec": 0, 00:18:18.615 "rw_mbytes_per_sec": 0, 00:18:18.615 "r_mbytes_per_sec": 0, 00:18:18.615 "w_mbytes_per_sec": 0 00:18:18.615 }, 00:18:18.615 "claimed": false, 00:18:18.615 "zoned": false, 00:18:18.615 "supported_io_types": { 00:18:18.615 "read": true, 00:18:18.615 "write": true, 00:18:18.615 "unmap": true, 00:18:18.615 "flush": true, 
00:18:18.615 "reset": true, 00:18:18.615 "nvme_admin": false, 00:18:18.615 "nvme_io": false, 00:18:18.615 "nvme_io_md": false, 00:18:18.615 "write_zeroes": true, 00:18:18.615 "zcopy": false, 00:18:18.615 "get_zone_info": false, 00:18:18.615 "zone_management": false, 00:18:18.615 "zone_append": false, 00:18:18.615 "compare": false, 00:18:18.615 "compare_and_write": false, 00:18:18.615 "abort": false, 00:18:18.615 "seek_hole": false, 00:18:18.615 "seek_data": false, 00:18:18.615 "copy": false, 00:18:18.615 "nvme_iov_md": false 00:18:18.615 }, 00:18:18.615 "memory_domains": [ 00:18:18.615 { 00:18:18.615 "dma_device_id": "system", 00:18:18.615 "dma_device_type": 1 00:18:18.615 }, 00:18:18.615 { 00:18:18.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.615 "dma_device_type": 2 00:18:18.615 }, 00:18:18.615 { 00:18:18.615 "dma_device_id": "system", 00:18:18.615 "dma_device_type": 1 00:18:18.615 }, 00:18:18.615 { 00:18:18.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.615 "dma_device_type": 2 00:18:18.615 }, 00:18:18.615 { 00:18:18.615 "dma_device_id": "system", 00:18:18.615 "dma_device_type": 1 00:18:18.615 }, 00:18:18.615 { 00:18:18.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.615 "dma_device_type": 2 00:18:18.615 } 00:18:18.615 ], 00:18:18.615 "driver_specific": { 00:18:18.615 "raid": { 00:18:18.615 "uuid": "54375f53-dd85-4725-8a06-4a4d4efc2d29", 00:18:18.615 "strip_size_kb": 64, 00:18:18.615 "state": "online", 00:18:18.615 "raid_level": "concat", 00:18:18.615 "superblock": false, 00:18:18.615 "num_base_bdevs": 3, 00:18:18.615 "num_base_bdevs_discovered": 3, 00:18:18.615 "num_base_bdevs_operational": 3, 00:18:18.615 "base_bdevs_list": [ 00:18:18.615 { 00:18:18.615 "name": "BaseBdev1", 00:18:18.615 "uuid": "baac28c2-bd00-4151-9929-6cd8ddfb3211", 00:18:18.615 "is_configured": true, 00:18:18.615 "data_offset": 0, 00:18:18.615 "data_size": 65536 00:18:18.615 }, 00:18:18.615 { 00:18:18.615 "name": "BaseBdev2", 00:18:18.615 "uuid": "91974975-66ad-4a99-bd21-81791ebb4ef4", 00:18:18.615 "is_configured": true, 00:18:18.615 "data_offset": 0, 00:18:18.616 "data_size": 65536 00:18:18.616 }, 00:18:18.616 { 00:18:18.616 "name": "BaseBdev3", 00:18:18.616 "uuid": "b5be4fbe-dd7a-4a0b-8630-871bd148f4e6", 00:18:18.616 "is_configured": true, 00:18:18.616 "data_offset": 0, 00:18:18.616 "data_size": 65536 00:18:18.616 } 00:18:18.616 ] 00:18:18.616 } 00:18:18.616 } 00:18:18.616 }' 00:18:18.616 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:18.616 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:18.616 BaseBdev2 00:18:18.616 BaseBdev3' 00:18:18.616 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:18.616 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:18.616 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:18.953 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:18.953 "name": "BaseBdev1", 00:18:18.953 "aliases": [ 00:18:18.953 "baac28c2-bd00-4151-9929-6cd8ddfb3211" 00:18:18.953 ], 00:18:18.953 "product_name": "Malloc disk", 00:18:18.953 "block_size": 512, 00:18:18.953 "num_blocks": 65536, 00:18:18.953 "uuid": "baac28c2-bd00-4151-9929-6cd8ddfb3211", 
00:18:18.953 "assigned_rate_limits": { 00:18:18.953 "rw_ios_per_sec": 0, 00:18:18.953 "rw_mbytes_per_sec": 0, 00:18:18.953 "r_mbytes_per_sec": 0, 00:18:18.953 "w_mbytes_per_sec": 0 00:18:18.953 }, 00:18:18.953 "claimed": true, 00:18:18.953 "claim_type": "exclusive_write", 00:18:18.953 "zoned": false, 00:18:18.953 "supported_io_types": { 00:18:18.953 "read": true, 00:18:18.953 "write": true, 00:18:18.953 "unmap": true, 00:18:18.953 "flush": true, 00:18:18.953 "reset": true, 00:18:18.953 "nvme_admin": false, 00:18:18.953 "nvme_io": false, 00:18:18.953 "nvme_io_md": false, 00:18:18.953 "write_zeroes": true, 00:18:18.953 "zcopy": true, 00:18:18.953 "get_zone_info": false, 00:18:18.953 "zone_management": false, 00:18:18.953 "zone_append": false, 00:18:18.953 "compare": false, 00:18:18.953 "compare_and_write": false, 00:18:18.953 "abort": true, 00:18:18.953 "seek_hole": false, 00:18:18.953 "seek_data": false, 00:18:18.953 "copy": true, 00:18:18.953 "nvme_iov_md": false 00:18:18.953 }, 00:18:18.953 "memory_domains": [ 00:18:18.953 { 00:18:18.953 "dma_device_id": "system", 00:18:18.953 "dma_device_type": 1 00:18:18.953 }, 00:18:18.953 { 00:18:18.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.953 "dma_device_type": 2 00:18:18.953 } 00:18:18.953 ], 00:18:18.953 "driver_specific": {} 00:18:18.953 }' 00:18:18.953 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.953 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:18.953 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:18.953 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.213 14:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:19.213 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.781 "name": "BaseBdev2", 00:18:19.781 "aliases": [ 00:18:19.781 "91974975-66ad-4a99-bd21-81791ebb4ef4" 00:18:19.781 ], 00:18:19.781 "product_name": "Malloc disk", 00:18:19.781 "block_size": 512, 00:18:19.781 "num_blocks": 65536, 00:18:19.781 "uuid": "91974975-66ad-4a99-bd21-81791ebb4ef4", 00:18:19.781 "assigned_rate_limits": { 00:18:19.781 "rw_ios_per_sec": 0, 00:18:19.781 "rw_mbytes_per_sec": 0, 00:18:19.781 "r_mbytes_per_sec": 0, 00:18:19.781 "w_mbytes_per_sec": 0 00:18:19.781 }, 
00:18:19.781 "claimed": true, 00:18:19.781 "claim_type": "exclusive_write", 00:18:19.781 "zoned": false, 00:18:19.781 "supported_io_types": { 00:18:19.781 "read": true, 00:18:19.781 "write": true, 00:18:19.781 "unmap": true, 00:18:19.781 "flush": true, 00:18:19.781 "reset": true, 00:18:19.781 "nvme_admin": false, 00:18:19.781 "nvme_io": false, 00:18:19.781 "nvme_io_md": false, 00:18:19.781 "write_zeroes": true, 00:18:19.781 "zcopy": true, 00:18:19.781 "get_zone_info": false, 00:18:19.781 "zone_management": false, 00:18:19.781 "zone_append": false, 00:18:19.781 "compare": false, 00:18:19.781 "compare_and_write": false, 00:18:19.781 "abort": true, 00:18:19.781 "seek_hole": false, 00:18:19.781 "seek_data": false, 00:18:19.781 "copy": true, 00:18:19.781 "nvme_iov_md": false 00:18:19.781 }, 00:18:19.781 "memory_domains": [ 00:18:19.781 { 00:18:19.781 "dma_device_id": "system", 00:18:19.781 "dma_device_type": 1 00:18:19.781 }, 00:18:19.781 { 00:18:19.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.781 "dma_device_type": 2 00:18:19.781 } 00:18:19.781 ], 00:18:19.781 "driver_specific": {} 00:18:19.781 }' 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.781 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:20.041 14:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.299 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.299 "name": "BaseBdev3", 00:18:20.299 "aliases": [ 00:18:20.299 "b5be4fbe-dd7a-4a0b-8630-871bd148f4e6" 00:18:20.299 ], 00:18:20.299 "product_name": "Malloc disk", 00:18:20.299 "block_size": 512, 00:18:20.299 "num_blocks": 65536, 00:18:20.299 "uuid": "b5be4fbe-dd7a-4a0b-8630-871bd148f4e6", 00:18:20.299 "assigned_rate_limits": { 00:18:20.299 "rw_ios_per_sec": 0, 00:18:20.299 "rw_mbytes_per_sec": 0, 00:18:20.299 "r_mbytes_per_sec": 0, 00:18:20.299 "w_mbytes_per_sec": 0 00:18:20.299 }, 00:18:20.299 "claimed": true, 00:18:20.299 "claim_type": "exclusive_write", 00:18:20.299 "zoned": false, 00:18:20.299 "supported_io_types": { 00:18:20.299 "read": true, 00:18:20.299 "write": true, 
00:18:20.299 "unmap": true, 00:18:20.299 "flush": true, 00:18:20.299 "reset": true, 00:18:20.299 "nvme_admin": false, 00:18:20.299 "nvme_io": false, 00:18:20.299 "nvme_io_md": false, 00:18:20.299 "write_zeroes": true, 00:18:20.299 "zcopy": true, 00:18:20.299 "get_zone_info": false, 00:18:20.299 "zone_management": false, 00:18:20.299 "zone_append": false, 00:18:20.299 "compare": false, 00:18:20.299 "compare_and_write": false, 00:18:20.299 "abort": true, 00:18:20.299 "seek_hole": false, 00:18:20.299 "seek_data": false, 00:18:20.299 "copy": true, 00:18:20.299 "nvme_iov_md": false 00:18:20.299 }, 00:18:20.299 "memory_domains": [ 00:18:20.299 { 00:18:20.299 "dma_device_id": "system", 00:18:20.299 "dma_device_type": 1 00:18:20.299 }, 00:18:20.299 { 00:18:20.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.299 "dma_device_type": 2 00:18:20.299 } 00:18:20.299 ], 00:18:20.299 "driver_specific": {} 00:18:20.299 }' 00:18:20.299 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.299 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.559 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.817 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.817 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.817 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:21.075 [2024-07-15 14:11:06.871268] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:21.075 [2024-07-15 14:11:06.871497] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.075 [2024-07-15 14:11:06.871689] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.075 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:21.075 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:21.075 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:21.075 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:21.075 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.076 14:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.333 14:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:21.333 "name": "Existed_Raid", 00:18:21.333 "uuid": "54375f53-dd85-4725-8a06-4a4d4efc2d29", 00:18:21.333 "strip_size_kb": 64, 00:18:21.333 "state": "offline", 00:18:21.333 "raid_level": "concat", 00:18:21.333 "superblock": false, 00:18:21.333 "num_base_bdevs": 3, 00:18:21.333 "num_base_bdevs_discovered": 2, 00:18:21.333 "num_base_bdevs_operational": 2, 00:18:21.333 "base_bdevs_list": [ 00:18:21.333 { 00:18:21.333 "name": null, 00:18:21.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.333 "is_configured": false, 00:18:21.333 "data_offset": 0, 00:18:21.333 "data_size": 65536 00:18:21.333 }, 00:18:21.333 { 00:18:21.333 "name": "BaseBdev2", 00:18:21.333 "uuid": "91974975-66ad-4a99-bd21-81791ebb4ef4", 00:18:21.333 "is_configured": true, 00:18:21.333 "data_offset": 0, 00:18:21.333 "data_size": 65536 00:18:21.333 }, 00:18:21.333 { 00:18:21.333 "name": "BaseBdev3", 00:18:21.333 "uuid": "b5be4fbe-dd7a-4a0b-8630-871bd148f4e6", 00:18:21.333 "is_configured": true, 00:18:21.333 "data_offset": 0, 00:18:21.333 "data_size": 65536 00:18:21.333 } 00:18:21.333 ] 00:18:21.333 }' 00:18:21.333 14:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:21.333 14:11:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.898 14:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:21.898 14:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:21.898 14:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.898 14:11:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:22.464 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:22.464 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:22.464 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:22.722 [2024-07-15 14:11:08.522152] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:22.722 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:22.722 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:22.722 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.722 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:23.001 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:23.001 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.001 14:11:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:23.260 [2024-07-15 14:11:09.169582] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:23.260 [2024-07-15 14:11:09.169954] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:23.517 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:23.517 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:23.517 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:23.517 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.775 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:23.775 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:23.775 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:23.775 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:23.775 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:23.775 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:24.033 BaseBdev2 00:18:24.033 14:11:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:24.033 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:24.033 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:24.033 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:24.033 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:24.033 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:24.033 14:11:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.291 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
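(Editorial aside: the trace immediately above is the script's waitforbdev step. After BaseBdev2 is removed and then recreated as a malloc disk, the script waits for bdev examination to finish and queries the bdev with a timeout before the RAID state checks continue. A minimal sketch of that sequence, using only the rpc.py calls and socket path seen in this run — the `rpc` shorthand variable is illustrative, the actual script spells out the full path each time:)

```bash
#!/usr/bin/env bash
# Sketch of the create-and-wait pattern visible in the trace above.
# Assumes the SPDK target is already listening on the raid test socket.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Recreate the base bdev: 32 MB malloc disk with 512-byte blocks, named BaseBdev2.
$rpc bdev_malloc_create 32 512 -b BaseBdev2

# Let the bdev layer finish examining the new bdev before using it.
$rpc bdev_wait_for_examine

# Query the bdev, retrying for up to 2000 ms until it is registered,
# and unwrap the single-element array as the script does with jq '.[]'.
$rpc bdev_get_bdevs -b BaseBdev2 -t 2000 | jq '.[]'
```

(The JSON object printed below is the output of exactly such a bdev_get_bdevs call in this run.)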
00:18:24.549 [ 00:18:24.549 { 00:18:24.549 "name": "BaseBdev2", 00:18:24.549 "aliases": [ 00:18:24.549 "0e0b92c0-1b1c-4eea-a6aa-17e078b41789" 00:18:24.549 ], 00:18:24.549 "product_name": "Malloc disk", 00:18:24.549 "block_size": 512, 00:18:24.549 "num_blocks": 65536, 00:18:24.549 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:24.549 "assigned_rate_limits": { 00:18:24.549 "rw_ios_per_sec": 0, 00:18:24.549 "rw_mbytes_per_sec": 0, 00:18:24.549 "r_mbytes_per_sec": 0, 00:18:24.549 "w_mbytes_per_sec": 0 00:18:24.549 }, 00:18:24.549 "claimed": false, 00:18:24.549 "zoned": false, 00:18:24.549 "supported_io_types": { 00:18:24.549 "read": true, 00:18:24.549 "write": true, 00:18:24.549 "unmap": true, 00:18:24.549 "flush": true, 00:18:24.549 "reset": true, 00:18:24.549 "nvme_admin": false, 00:18:24.549 "nvme_io": false, 00:18:24.549 "nvme_io_md": false, 00:18:24.549 "write_zeroes": true, 00:18:24.550 "zcopy": true, 00:18:24.550 "get_zone_info": false, 00:18:24.550 "zone_management": false, 00:18:24.550 "zone_append": false, 00:18:24.550 "compare": false, 00:18:24.550 "compare_and_write": false, 00:18:24.550 "abort": true, 00:18:24.550 "seek_hole": false, 00:18:24.550 "seek_data": false, 00:18:24.550 "copy": true, 00:18:24.550 "nvme_iov_md": false 00:18:24.550 }, 00:18:24.550 "memory_domains": [ 00:18:24.550 { 00:18:24.550 "dma_device_id": "system", 00:18:24.550 "dma_device_type": 1 00:18:24.550 }, 00:18:24.550 { 00:18:24.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.550 "dma_device_type": 2 00:18:24.550 } 00:18:24.550 ], 00:18:24.550 "driver_specific": {} 00:18:24.550 } 00:18:24.550 ] 00:18:24.550 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:24.550 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:24.550 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:24.550 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:24.808 BaseBdev3 00:18:24.808 14:11:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:24.808 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:24.808 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:24.808 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:24.808 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:24.808 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:24.808 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.067 14:11:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:25.067 [ 00:18:25.067 { 00:18:25.067 "name": "BaseBdev3", 00:18:25.067 "aliases": [ 00:18:25.067 "2b308438-73c2-4794-a7ce-6c729324a9b4" 00:18:25.067 ], 00:18:25.067 "product_name": "Malloc disk", 00:18:25.067 "block_size": 512, 00:18:25.067 "num_blocks": 65536, 00:18:25.067 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:25.067 
"assigned_rate_limits": { 00:18:25.067 "rw_ios_per_sec": 0, 00:18:25.067 "rw_mbytes_per_sec": 0, 00:18:25.067 "r_mbytes_per_sec": 0, 00:18:25.067 "w_mbytes_per_sec": 0 00:18:25.067 }, 00:18:25.067 "claimed": false, 00:18:25.067 "zoned": false, 00:18:25.067 "supported_io_types": { 00:18:25.067 "read": true, 00:18:25.067 "write": true, 00:18:25.067 "unmap": true, 00:18:25.067 "flush": true, 00:18:25.067 "reset": true, 00:18:25.067 "nvme_admin": false, 00:18:25.067 "nvme_io": false, 00:18:25.067 "nvme_io_md": false, 00:18:25.067 "write_zeroes": true, 00:18:25.067 "zcopy": true, 00:18:25.067 "get_zone_info": false, 00:18:25.067 "zone_management": false, 00:18:25.067 "zone_append": false, 00:18:25.067 "compare": false, 00:18:25.067 "compare_and_write": false, 00:18:25.067 "abort": true, 00:18:25.067 "seek_hole": false, 00:18:25.067 "seek_data": false, 00:18:25.067 "copy": true, 00:18:25.067 "nvme_iov_md": false 00:18:25.067 }, 00:18:25.067 "memory_domains": [ 00:18:25.067 { 00:18:25.067 "dma_device_id": "system", 00:18:25.067 "dma_device_type": 1 00:18:25.067 }, 00:18:25.067 { 00:18:25.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.067 "dma_device_type": 2 00:18:25.067 } 00:18:25.067 ], 00:18:25.067 "driver_specific": {} 00:18:25.067 } 00:18:25.067 ] 00:18:25.325 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:25.325 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:25.325 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:25.325 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:25.584 [2024-07-15 14:11:11.353764] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:25.584 [2024-07-15 14:11:11.353849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:25.584 [2024-07-15 14:11:11.353906] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.584 [2024-07-15 14:11:11.355328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.584 14:11:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.584 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.867 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.867 "name": "Existed_Raid", 00:18:25.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.867 "strip_size_kb": 64, 00:18:25.867 "state": "configuring", 00:18:25.867 "raid_level": "concat", 00:18:25.867 "superblock": false, 00:18:25.867 "num_base_bdevs": 3, 00:18:25.867 "num_base_bdevs_discovered": 2, 00:18:25.867 "num_base_bdevs_operational": 3, 00:18:25.867 "base_bdevs_list": [ 00:18:25.867 { 00:18:25.867 "name": "BaseBdev1", 00:18:25.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.867 "is_configured": false, 00:18:25.867 "data_offset": 0, 00:18:25.867 "data_size": 0 00:18:25.867 }, 00:18:25.867 { 00:18:25.867 "name": "BaseBdev2", 00:18:25.867 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:25.867 "is_configured": true, 00:18:25.867 "data_offset": 0, 00:18:25.867 "data_size": 65536 00:18:25.867 }, 00:18:25.867 { 00:18:25.867 "name": "BaseBdev3", 00:18:25.867 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:25.867 "is_configured": true, 00:18:25.867 "data_offset": 0, 00:18:25.867 "data_size": 65536 00:18:25.867 } 00:18:25.867 ] 00:18:25.867 }' 00:18:25.867 14:11:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.867 14:11:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:26.452 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:26.711 [2024-07-15 14:11:12.473875] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.711 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.969 14:11:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.969 "name": "Existed_Raid", 00:18:26.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.969 "strip_size_kb": 64, 00:18:26.969 "state": "configuring", 00:18:26.969 "raid_level": "concat", 00:18:26.969 "superblock": false, 00:18:26.969 "num_base_bdevs": 3, 00:18:26.969 "num_base_bdevs_discovered": 1, 00:18:26.969 "num_base_bdevs_operational": 3, 00:18:26.969 "base_bdevs_list": [ 00:18:26.969 { 00:18:26.969 "name": "BaseBdev1", 00:18:26.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.969 "is_configured": false, 00:18:26.969 "data_offset": 0, 00:18:26.969 "data_size": 0 00:18:26.969 }, 00:18:26.969 { 00:18:26.969 "name": null, 00:18:26.969 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:26.969 "is_configured": false, 00:18:26.969 "data_offset": 0, 00:18:26.969 "data_size": 65536 00:18:26.969 }, 00:18:26.969 { 00:18:26.969 "name": "BaseBdev3", 00:18:26.969 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:26.969 "is_configured": true, 00:18:26.969 "data_offset": 0, 00:18:26.969 "data_size": 65536 00:18:26.969 } 00:18:26.969 ] 00:18:26.969 }' 00:18:26.969 14:11:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.969 14:11:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:27.535 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:27.535 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.792 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:27.792 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.050 [2024-07-15 14:11:13.973329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.050 BaseBdev1 00:18:28.050 14:11:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:28.050 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:28.050 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:28.050 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:28.050 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:28.050 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:28.050 14:11:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.308 14:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:28.571 [ 00:18:28.571 { 00:18:28.571 "name": "BaseBdev1", 00:18:28.571 "aliases": [ 00:18:28.571 "41373423-02fd-48d2-ac71-572e4fadf6c4" 00:18:28.571 ], 00:18:28.571 "product_name": "Malloc disk", 00:18:28.571 "block_size": 512, 00:18:28.571 "num_blocks": 65536, 00:18:28.571 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:28.571 "assigned_rate_limits": { 00:18:28.571 
"rw_ios_per_sec": 0, 00:18:28.571 "rw_mbytes_per_sec": 0, 00:18:28.571 "r_mbytes_per_sec": 0, 00:18:28.571 "w_mbytes_per_sec": 0 00:18:28.571 }, 00:18:28.571 "claimed": true, 00:18:28.571 "claim_type": "exclusive_write", 00:18:28.571 "zoned": false, 00:18:28.571 "supported_io_types": { 00:18:28.571 "read": true, 00:18:28.571 "write": true, 00:18:28.571 "unmap": true, 00:18:28.571 "flush": true, 00:18:28.571 "reset": true, 00:18:28.571 "nvme_admin": false, 00:18:28.571 "nvme_io": false, 00:18:28.571 "nvme_io_md": false, 00:18:28.571 "write_zeroes": true, 00:18:28.571 "zcopy": true, 00:18:28.571 "get_zone_info": false, 00:18:28.571 "zone_management": false, 00:18:28.571 "zone_append": false, 00:18:28.571 "compare": false, 00:18:28.571 "compare_and_write": false, 00:18:28.571 "abort": true, 00:18:28.571 "seek_hole": false, 00:18:28.571 "seek_data": false, 00:18:28.571 "copy": true, 00:18:28.571 "nvme_iov_md": false 00:18:28.571 }, 00:18:28.571 "memory_domains": [ 00:18:28.571 { 00:18:28.571 "dma_device_id": "system", 00:18:28.571 "dma_device_type": 1 00:18:28.571 }, 00:18:28.571 { 00:18:28.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.571 "dma_device_type": 2 00:18:28.571 } 00:18:28.571 ], 00:18:28.571 "driver_specific": {} 00:18:28.571 } 00:18:28.571 ] 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.571 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.829 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.829 "name": "Existed_Raid", 00:18:28.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.829 "strip_size_kb": 64, 00:18:28.829 "state": "configuring", 00:18:28.829 "raid_level": "concat", 00:18:28.829 "superblock": false, 00:18:28.829 "num_base_bdevs": 3, 00:18:28.829 "num_base_bdevs_discovered": 2, 00:18:28.829 "num_base_bdevs_operational": 3, 00:18:28.829 "base_bdevs_list": [ 00:18:28.829 { 00:18:28.829 "name": "BaseBdev1", 00:18:28.829 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:28.829 "is_configured": true, 00:18:28.829 "data_offset": 0, 00:18:28.829 
"data_size": 65536 00:18:28.829 }, 00:18:28.829 { 00:18:28.829 "name": null, 00:18:28.829 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:28.829 "is_configured": false, 00:18:28.829 "data_offset": 0, 00:18:28.829 "data_size": 65536 00:18:28.829 }, 00:18:28.829 { 00:18:28.829 "name": "BaseBdev3", 00:18:28.829 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:28.829 "is_configured": true, 00:18:28.829 "data_offset": 0, 00:18:28.829 "data_size": 65536 00:18:28.829 } 00:18:28.829 ] 00:18:28.829 }' 00:18:28.829 14:11:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.829 14:11:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:29.395 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.395 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:29.653 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:29.653 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:29.911 [2024-07-15 14:11:15.832416] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.911 14:11:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.173 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.173 "name": "Existed_Raid", 00:18:30.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.173 "strip_size_kb": 64, 00:18:30.173 "state": "configuring", 00:18:30.173 "raid_level": "concat", 00:18:30.173 "superblock": false, 00:18:30.173 "num_base_bdevs": 3, 00:18:30.173 "num_base_bdevs_discovered": 1, 00:18:30.173 "num_base_bdevs_operational": 3, 00:18:30.173 "base_bdevs_list": [ 00:18:30.173 { 00:18:30.173 "name": "BaseBdev1", 00:18:30.173 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:30.173 "is_configured": 
true, 00:18:30.173 "data_offset": 0, 00:18:30.173 "data_size": 65536 00:18:30.173 }, 00:18:30.173 { 00:18:30.173 "name": null, 00:18:30.173 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:30.173 "is_configured": false, 00:18:30.173 "data_offset": 0, 00:18:30.173 "data_size": 65536 00:18:30.173 }, 00:18:30.173 { 00:18:30.173 "name": null, 00:18:30.173 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:30.173 "is_configured": false, 00:18:30.173 "data_offset": 0, 00:18:30.173 "data_size": 65536 00:18:30.173 } 00:18:30.173 ] 00:18:30.173 }' 00:18:30.173 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.432 14:11:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:30.997 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.997 14:11:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:31.255 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:31.255 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:31.513 [2024-07-15 14:11:17.301949] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.513 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.771 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:31.771 "name": "Existed_Raid", 00:18:31.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.771 "strip_size_kb": 64, 00:18:31.771 "state": "configuring", 00:18:31.771 "raid_level": "concat", 00:18:31.771 "superblock": false, 00:18:31.771 "num_base_bdevs": 3, 00:18:31.771 "num_base_bdevs_discovered": 2, 00:18:31.771 "num_base_bdevs_operational": 3, 00:18:31.771 "base_bdevs_list": [ 00:18:31.771 { 00:18:31.771 "name": "BaseBdev1", 00:18:31.771 
"uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:31.771 "is_configured": true, 00:18:31.771 "data_offset": 0, 00:18:31.771 "data_size": 65536 00:18:31.771 }, 00:18:31.771 { 00:18:31.771 "name": null, 00:18:31.771 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:31.771 "is_configured": false, 00:18:31.771 "data_offset": 0, 00:18:31.771 "data_size": 65536 00:18:31.771 }, 00:18:31.771 { 00:18:31.771 "name": "BaseBdev3", 00:18:31.771 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:31.771 "is_configured": true, 00:18:31.771 "data_offset": 0, 00:18:31.771 "data_size": 65536 00:18:31.771 } 00:18:31.771 ] 00:18:31.771 }' 00:18:31.771 14:11:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:31.771 14:11:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:32.338 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.338 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:32.615 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:32.615 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:32.872 [2024-07-15 14:11:18.685093] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.872 14:11:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.131 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.131 "name": "Existed_Raid", 00:18:33.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.131 "strip_size_kb": 64, 00:18:33.131 "state": "configuring", 00:18:33.131 "raid_level": "concat", 00:18:33.131 "superblock": false, 00:18:33.131 "num_base_bdevs": 3, 00:18:33.131 "num_base_bdevs_discovered": 1, 00:18:33.131 "num_base_bdevs_operational": 3, 00:18:33.131 "base_bdevs_list": [ 00:18:33.131 { 
00:18:33.131 "name": null, 00:18:33.131 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:33.131 "is_configured": false, 00:18:33.131 "data_offset": 0, 00:18:33.131 "data_size": 65536 00:18:33.131 }, 00:18:33.131 { 00:18:33.131 "name": null, 00:18:33.131 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:33.131 "is_configured": false, 00:18:33.131 "data_offset": 0, 00:18:33.131 "data_size": 65536 00:18:33.131 }, 00:18:33.131 { 00:18:33.131 "name": "BaseBdev3", 00:18:33.131 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:33.131 "is_configured": true, 00:18:33.131 "data_offset": 0, 00:18:33.131 "data_size": 65536 00:18:33.131 } 00:18:33.131 ] 00:18:33.131 }' 00:18:33.131 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.131 14:11:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:34.066 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.066 14:11:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:34.066 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:34.066 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:34.324 [2024-07-15 14:11:20.245284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.324 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:34.588 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.588 "name": "Existed_Raid", 00:18:34.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.588 "strip_size_kb": 64, 00:18:34.588 "state": "configuring", 00:18:34.588 "raid_level": "concat", 00:18:34.588 "superblock": false, 00:18:34.588 "num_base_bdevs": 3, 00:18:34.588 "num_base_bdevs_discovered": 2, 00:18:34.588 
"num_base_bdevs_operational": 3, 00:18:34.588 "base_bdevs_list": [ 00:18:34.588 { 00:18:34.588 "name": null, 00:18:34.588 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:34.588 "is_configured": false, 00:18:34.588 "data_offset": 0, 00:18:34.588 "data_size": 65536 00:18:34.588 }, 00:18:34.588 { 00:18:34.588 "name": "BaseBdev2", 00:18:34.588 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:34.588 "is_configured": true, 00:18:34.588 "data_offset": 0, 00:18:34.588 "data_size": 65536 00:18:34.588 }, 00:18:34.588 { 00:18:34.588 "name": "BaseBdev3", 00:18:34.588 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:34.588 "is_configured": true, 00:18:34.588 "data_offset": 0, 00:18:34.588 "data_size": 65536 00:18:34.588 } 00:18:34.588 ] 00:18:34.588 }' 00:18:34.588 14:11:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.588 14:11:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.522 14:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.522 14:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:35.522 14:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:35.522 14:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.522 14:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:35.780 14:11:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 41373423-02fd-48d2-ac71-572e4fadf6c4 00:18:36.368 [2024-07-15 14:11:22.060613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:36.368 [2024-07-15 14:11:22.060682] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:18:36.368 [2024-07-15 14:11:22.060693] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:36.368 [2024-07-15 14:11:22.060818] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:36.368 [2024-07-15 14:11:22.061052] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:18:36.368 [2024-07-15 14:11:22.061077] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:18:36.368 [2024-07-15 14:11:22.061272] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.368 NewBaseBdev 00:18:36.368 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:36.368 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:18:36.368 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:36.368 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:18:36.368 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:36.368 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:36.368 
14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.655 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:36.655 [ 00:18:36.655 { 00:18:36.655 "name": "NewBaseBdev", 00:18:36.655 "aliases": [ 00:18:36.655 "41373423-02fd-48d2-ac71-572e4fadf6c4" 00:18:36.655 ], 00:18:36.655 "product_name": "Malloc disk", 00:18:36.655 "block_size": 512, 00:18:36.655 "num_blocks": 65536, 00:18:36.655 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:36.655 "assigned_rate_limits": { 00:18:36.655 "rw_ios_per_sec": 0, 00:18:36.655 "rw_mbytes_per_sec": 0, 00:18:36.655 "r_mbytes_per_sec": 0, 00:18:36.655 "w_mbytes_per_sec": 0 00:18:36.655 }, 00:18:36.655 "claimed": true, 00:18:36.655 "claim_type": "exclusive_write", 00:18:36.655 "zoned": false, 00:18:36.655 "supported_io_types": { 00:18:36.655 "read": true, 00:18:36.655 "write": true, 00:18:36.655 "unmap": true, 00:18:36.655 "flush": true, 00:18:36.655 "reset": true, 00:18:36.655 "nvme_admin": false, 00:18:36.655 "nvme_io": false, 00:18:36.655 "nvme_io_md": false, 00:18:36.655 "write_zeroes": true, 00:18:36.655 "zcopy": true, 00:18:36.655 "get_zone_info": false, 00:18:36.655 "zone_management": false, 00:18:36.655 "zone_append": false, 00:18:36.655 "compare": false, 00:18:36.655 "compare_and_write": false, 00:18:36.655 "abort": true, 00:18:36.655 "seek_hole": false, 00:18:36.655 "seek_data": false, 00:18:36.655 "copy": true, 00:18:36.655 "nvme_iov_md": false 00:18:36.655 }, 00:18:36.655 "memory_domains": [ 00:18:36.655 { 00:18:36.655 "dma_device_id": "system", 00:18:36.655 "dma_device_type": 1 00:18:36.655 }, 00:18:36.655 { 00:18:36.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.655 "dma_device_type": 2 00:18:36.655 } 00:18:36.655 ], 00:18:36.655 "driver_specific": {} 00:18:36.655 } 00:18:36.655 ] 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.913 14:11:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.170 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:37.170 "name": "Existed_Raid", 00:18:37.170 "uuid": "24c86694-69ea-4a71-820a-fef117e39aa6", 00:18:37.170 "strip_size_kb": 64, 00:18:37.170 "state": "online", 00:18:37.170 "raid_level": "concat", 00:18:37.170 "superblock": false, 00:18:37.170 "num_base_bdevs": 3, 00:18:37.170 "num_base_bdevs_discovered": 3, 00:18:37.170 "num_base_bdevs_operational": 3, 00:18:37.170 "base_bdevs_list": [ 00:18:37.170 { 00:18:37.170 "name": "NewBaseBdev", 00:18:37.170 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:37.170 "is_configured": true, 00:18:37.170 "data_offset": 0, 00:18:37.170 "data_size": 65536 00:18:37.170 }, 00:18:37.170 { 00:18:37.170 "name": "BaseBdev2", 00:18:37.170 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:37.170 "is_configured": true, 00:18:37.170 "data_offset": 0, 00:18:37.170 "data_size": 65536 00:18:37.170 }, 00:18:37.170 { 00:18:37.170 "name": "BaseBdev3", 00:18:37.170 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:37.170 "is_configured": true, 00:18:37.170 "data_offset": 0, 00:18:37.170 "data_size": 65536 00:18:37.170 } 00:18:37.170 ] 00:18:37.170 }' 00:18:37.170 14:11:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:37.170 14:11:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:37.740 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:37.998 [2024-07-15 14:11:23.905156] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:37.998 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:37.998 "name": "Existed_Raid", 00:18:37.998 "aliases": [ 00:18:37.998 "24c86694-69ea-4a71-820a-fef117e39aa6" 00:18:37.998 ], 00:18:37.998 "product_name": "Raid Volume", 00:18:37.998 "block_size": 512, 00:18:37.998 "num_blocks": 196608, 00:18:37.998 "uuid": "24c86694-69ea-4a71-820a-fef117e39aa6", 00:18:37.998 "assigned_rate_limits": { 00:18:37.998 "rw_ios_per_sec": 0, 00:18:37.998 "rw_mbytes_per_sec": 0, 00:18:37.998 "r_mbytes_per_sec": 0, 00:18:37.998 "w_mbytes_per_sec": 0 00:18:37.998 }, 00:18:37.998 "claimed": false, 00:18:37.998 "zoned": false, 00:18:37.998 "supported_io_types": { 00:18:37.998 "read": true, 00:18:37.998 "write": true, 00:18:37.998 "unmap": true, 00:18:37.998 "flush": true, 00:18:37.998 "reset": true, 00:18:37.998 "nvme_admin": false, 00:18:37.998 "nvme_io": false, 00:18:37.998 "nvme_io_md": false, 00:18:37.998 "write_zeroes": true, 00:18:37.998 
"zcopy": false, 00:18:37.998 "get_zone_info": false, 00:18:37.998 "zone_management": false, 00:18:37.998 "zone_append": false, 00:18:37.998 "compare": false, 00:18:37.998 "compare_and_write": false, 00:18:37.998 "abort": false, 00:18:37.998 "seek_hole": false, 00:18:37.998 "seek_data": false, 00:18:37.998 "copy": false, 00:18:37.998 "nvme_iov_md": false 00:18:37.998 }, 00:18:37.998 "memory_domains": [ 00:18:37.998 { 00:18:37.998 "dma_device_id": "system", 00:18:37.998 "dma_device_type": 1 00:18:37.998 }, 00:18:37.998 { 00:18:37.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.998 "dma_device_type": 2 00:18:37.998 }, 00:18:37.998 { 00:18:37.998 "dma_device_id": "system", 00:18:37.998 "dma_device_type": 1 00:18:37.998 }, 00:18:37.998 { 00:18:37.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.998 "dma_device_type": 2 00:18:37.998 }, 00:18:37.998 { 00:18:37.998 "dma_device_id": "system", 00:18:37.998 "dma_device_type": 1 00:18:37.998 }, 00:18:37.998 { 00:18:37.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.998 "dma_device_type": 2 00:18:37.998 } 00:18:37.998 ], 00:18:37.998 "driver_specific": { 00:18:37.998 "raid": { 00:18:37.998 "uuid": "24c86694-69ea-4a71-820a-fef117e39aa6", 00:18:37.998 "strip_size_kb": 64, 00:18:37.998 "state": "online", 00:18:37.998 "raid_level": "concat", 00:18:37.998 "superblock": false, 00:18:37.998 "num_base_bdevs": 3, 00:18:37.998 "num_base_bdevs_discovered": 3, 00:18:37.998 "num_base_bdevs_operational": 3, 00:18:37.998 "base_bdevs_list": [ 00:18:37.998 { 00:18:37.998 "name": "NewBaseBdev", 00:18:37.998 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:37.998 "is_configured": true, 00:18:37.998 "data_offset": 0, 00:18:37.998 "data_size": 65536 00:18:37.998 }, 00:18:37.998 { 00:18:37.998 "name": "BaseBdev2", 00:18:37.998 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:37.998 "is_configured": true, 00:18:37.998 "data_offset": 0, 00:18:37.998 "data_size": 65536 00:18:37.998 }, 00:18:37.998 { 00:18:37.998 "name": "BaseBdev3", 00:18:37.998 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:37.998 "is_configured": true, 00:18:37.998 "data_offset": 0, 00:18:37.998 "data_size": 65536 00:18:37.998 } 00:18:37.998 ] 00:18:37.998 } 00:18:37.998 } 00:18:37.998 }' 00:18:37.998 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:37.999 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:37.999 BaseBdev2 00:18:37.999 BaseBdev3' 00:18:37.999 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:37.999 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:37.999 14:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:38.256 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:38.256 "name": "NewBaseBdev", 00:18:38.256 "aliases": [ 00:18:38.256 "41373423-02fd-48d2-ac71-572e4fadf6c4" 00:18:38.256 ], 00:18:38.256 "product_name": "Malloc disk", 00:18:38.256 "block_size": 512, 00:18:38.256 "num_blocks": 65536, 00:18:38.256 "uuid": "41373423-02fd-48d2-ac71-572e4fadf6c4", 00:18:38.256 "assigned_rate_limits": { 00:18:38.256 "rw_ios_per_sec": 0, 00:18:38.256 "rw_mbytes_per_sec": 0, 00:18:38.256 "r_mbytes_per_sec": 0, 00:18:38.256 
"w_mbytes_per_sec": 0 00:18:38.256 }, 00:18:38.256 "claimed": true, 00:18:38.256 "claim_type": "exclusive_write", 00:18:38.256 "zoned": false, 00:18:38.256 "supported_io_types": { 00:18:38.256 "read": true, 00:18:38.256 "write": true, 00:18:38.256 "unmap": true, 00:18:38.256 "flush": true, 00:18:38.256 "reset": true, 00:18:38.256 "nvme_admin": false, 00:18:38.256 "nvme_io": false, 00:18:38.256 "nvme_io_md": false, 00:18:38.256 "write_zeroes": true, 00:18:38.256 "zcopy": true, 00:18:38.256 "get_zone_info": false, 00:18:38.256 "zone_management": false, 00:18:38.256 "zone_append": false, 00:18:38.256 "compare": false, 00:18:38.256 "compare_and_write": false, 00:18:38.256 "abort": true, 00:18:38.256 "seek_hole": false, 00:18:38.256 "seek_data": false, 00:18:38.256 "copy": true, 00:18:38.256 "nvme_iov_md": false 00:18:38.256 }, 00:18:38.256 "memory_domains": [ 00:18:38.256 { 00:18:38.256 "dma_device_id": "system", 00:18:38.256 "dma_device_type": 1 00:18:38.256 }, 00:18:38.256 { 00:18:38.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:38.256 "dma_device_type": 2 00:18:38.256 } 00:18:38.256 ], 00:18:38.256 "driver_specific": {} 00:18:38.256 }' 00:18:38.256 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.515 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:38.515 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:38.515 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.515 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:38.515 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:38.515 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.515 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:38.772 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:38.772 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.772 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:38.772 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:38.772 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:38.772 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:38.772 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:39.030 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:39.030 "name": "BaseBdev2", 00:18:39.030 "aliases": [ 00:18:39.030 "0e0b92c0-1b1c-4eea-a6aa-17e078b41789" 00:18:39.030 ], 00:18:39.030 "product_name": "Malloc disk", 00:18:39.030 "block_size": 512, 00:18:39.030 "num_blocks": 65536, 00:18:39.030 "uuid": "0e0b92c0-1b1c-4eea-a6aa-17e078b41789", 00:18:39.030 "assigned_rate_limits": { 00:18:39.030 "rw_ios_per_sec": 0, 00:18:39.030 "rw_mbytes_per_sec": 0, 00:18:39.030 "r_mbytes_per_sec": 0, 00:18:39.030 "w_mbytes_per_sec": 0 00:18:39.030 }, 00:18:39.030 "claimed": true, 00:18:39.030 "claim_type": "exclusive_write", 00:18:39.030 "zoned": false, 00:18:39.030 "supported_io_types": { 00:18:39.030 "read": 
true, 00:18:39.030 "write": true, 00:18:39.030 "unmap": true, 00:18:39.030 "flush": true, 00:18:39.030 "reset": true, 00:18:39.030 "nvme_admin": false, 00:18:39.030 "nvme_io": false, 00:18:39.030 "nvme_io_md": false, 00:18:39.030 "write_zeroes": true, 00:18:39.030 "zcopy": true, 00:18:39.030 "get_zone_info": false, 00:18:39.030 "zone_management": false, 00:18:39.030 "zone_append": false, 00:18:39.030 "compare": false, 00:18:39.030 "compare_and_write": false, 00:18:39.030 "abort": true, 00:18:39.030 "seek_hole": false, 00:18:39.030 "seek_data": false, 00:18:39.030 "copy": true, 00:18:39.030 "nvme_iov_md": false 00:18:39.030 }, 00:18:39.030 "memory_domains": [ 00:18:39.030 { 00:18:39.030 "dma_device_id": "system", 00:18:39.030 "dma_device_type": 1 00:18:39.030 }, 00:18:39.030 { 00:18:39.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.030 "dma_device_type": 2 00:18:39.030 } 00:18:39.030 ], 00:18:39.030 "driver_specific": {} 00:18:39.030 }' 00:18:39.030 14:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.030 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:39.287 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.545 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:39.545 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:39.545 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:39.545 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:39.545 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:39.803 "name": "BaseBdev3", 00:18:39.803 "aliases": [ 00:18:39.803 "2b308438-73c2-4794-a7ce-6c729324a9b4" 00:18:39.803 ], 00:18:39.803 "product_name": "Malloc disk", 00:18:39.803 "block_size": 512, 00:18:39.803 "num_blocks": 65536, 00:18:39.803 "uuid": "2b308438-73c2-4794-a7ce-6c729324a9b4", 00:18:39.803 "assigned_rate_limits": { 00:18:39.803 "rw_ios_per_sec": 0, 00:18:39.803 "rw_mbytes_per_sec": 0, 00:18:39.803 "r_mbytes_per_sec": 0, 00:18:39.803 "w_mbytes_per_sec": 0 00:18:39.803 }, 00:18:39.803 "claimed": true, 00:18:39.803 "claim_type": "exclusive_write", 00:18:39.803 "zoned": false, 00:18:39.803 "supported_io_types": { 00:18:39.803 "read": true, 00:18:39.803 "write": true, 00:18:39.803 "unmap": true, 00:18:39.803 "flush": true, 00:18:39.803 "reset": true, 00:18:39.803 "nvme_admin": false, 00:18:39.803 "nvme_io": false, 00:18:39.803 
"nvme_io_md": false, 00:18:39.803 "write_zeroes": true, 00:18:39.803 "zcopy": true, 00:18:39.803 "get_zone_info": false, 00:18:39.803 "zone_management": false, 00:18:39.803 "zone_append": false, 00:18:39.803 "compare": false, 00:18:39.803 "compare_and_write": false, 00:18:39.803 "abort": true, 00:18:39.803 "seek_hole": false, 00:18:39.803 "seek_data": false, 00:18:39.803 "copy": true, 00:18:39.803 "nvme_iov_md": false 00:18:39.803 }, 00:18:39.803 "memory_domains": [ 00:18:39.803 { 00:18:39.803 "dma_device_id": "system", 00:18:39.803 "dma_device_type": 1 00:18:39.803 }, 00:18:39.803 { 00:18:39.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:39.803 "dma_device_type": 2 00:18:39.803 } 00:18:39.803 ], 00:18:39.803 "driver_specific": {} 00:18:39.803 }' 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:39.803 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.061 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:40.061 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:40.061 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.061 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:40.061 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:40.061 14:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:40.318 [2024-07-15 14:11:26.271382] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.318 [2024-07-15 14:11:26.271691] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.318 [2024-07-15 14:11:26.271942] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.318 [2024-07-15 14:11:26.272119] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.318 [2024-07-15 14:11:26.272243] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 194147 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 194147 ']' 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 194147 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 194147 00:18:40.318 
14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 194147' 00:18:40.318 killing process with pid 194147 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 194147 00:18:40.318 [2024-07-15 14:11:26.318379] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:40.318 14:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 194147 00:18:40.881 [2024-07-15 14:11:26.587775] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:42.253 00:18:42.253 real 0m33.044s 00:18:42.253 user 1m0.801s 00:18:42.253 sys 0m3.813s 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.253 ************************************ 00:18:42.253 END TEST raid_state_function_test 00:18:42.253 ************************************ 00:18:42.253 14:11:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:42.253 14:11:27 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:18:42.253 14:11:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:42.253 14:11:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:42.253 14:11:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:42.253 ************************************ 00:18:42.253 START TEST raid_state_function_test_sb 00:18:42.253 ************************************ 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 
00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=195175 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:42.253 Process raid pid: 195175 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 195175' 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 195175 /var/tmp/spdk-raid.sock 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 195175 ']' 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:42.253 14:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:42.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:42.254 14:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:42.254 14:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:42.254 [2024-07-15 14:11:28.026419] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
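At this point the trace has forked a dedicated SPDK bdev_svc daemon for the superblock variant and is waiting for its RPC socket; the DPDK/EAL and reactor notices that follow are that app initializing. A condensed sketch of the launch-and-wait pattern, using the binary, socket path and pid recorded in this run (the backgrounding and the raid_pid assignment are implied by the trace rather than shown verbatim; waitforlisten is the helper from common/autotest_common.sh):

    # start the minimal SPDK app used by the raid tests, with bdev_raid debug logging
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!                                   # 195175 in this run
    # block until the app listens on the UNIX domain socket before issuing any RPCs
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock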
00:18:42.254 [2024-07-15 14:11:28.027034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.254 [2024-07-15 14:11:28.198336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.512 [2024-07-15 14:11:28.471683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.783 [2024-07-15 14:11:28.691243] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:43.346 14:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.346 14:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:18:43.346 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:43.604 [2024-07-15 14:11:29.413921] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:43.604 [2024-07-15 14:11:29.414779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:43.604 [2024-07-15 14:11:29.414948] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:43.604 [2024-07-15 14:11:29.415193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:43.605 [2024-07-15 14:11:29.415342] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:43.605 [2024-07-15 14:11:29.415583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.605 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.862 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:43.862 "name": "Existed_Raid", 00:18:43.862 "uuid": 
"8cde63d8-ad8a-40c6-ad0a-cd588eee56b9", 00:18:43.862 "strip_size_kb": 64, 00:18:43.862 "state": "configuring", 00:18:43.862 "raid_level": "concat", 00:18:43.862 "superblock": true, 00:18:43.862 "num_base_bdevs": 3, 00:18:43.862 "num_base_bdevs_discovered": 0, 00:18:43.862 "num_base_bdevs_operational": 3, 00:18:43.862 "base_bdevs_list": [ 00:18:43.862 { 00:18:43.862 "name": "BaseBdev1", 00:18:43.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.862 "is_configured": false, 00:18:43.862 "data_offset": 0, 00:18:43.862 "data_size": 0 00:18:43.862 }, 00:18:43.862 { 00:18:43.862 "name": "BaseBdev2", 00:18:43.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.862 "is_configured": false, 00:18:43.862 "data_offset": 0, 00:18:43.862 "data_size": 0 00:18:43.862 }, 00:18:43.862 { 00:18:43.862 "name": "BaseBdev3", 00:18:43.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.862 "is_configured": false, 00:18:43.862 "data_offset": 0, 00:18:43.862 "data_size": 0 00:18:43.862 } 00:18:43.862 ] 00:18:43.862 }' 00:18:43.862 14:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:43.862 14:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.794 14:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:44.794 [2024-07-15 14:11:30.766082] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:44.794 [2024-07-15 14:11:30.766455] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:44.794 14:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:45.360 [2024-07-15 14:11:31.078173] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:45.360 [2024-07-15 14:11:31.078751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:45.360 [2024-07-15 14:11:31.078915] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:45.360 [2024-07-15 14:11:31.079068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:45.360 [2024-07-15 14:11:31.079188] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:45.360 [2024-07-15 14:11:31.079312] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:45.360 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:45.618 [2024-07-15 14:11:31.428557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.618 BaseBdev1 00:18:45.618 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:45.618 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:45.618 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:45.618 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:18:45.618 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:45.618 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:45.618 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:45.877 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:46.136 [ 00:18:46.136 { 00:18:46.136 "name": "BaseBdev1", 00:18:46.136 "aliases": [ 00:18:46.136 "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad" 00:18:46.136 ], 00:18:46.136 "product_name": "Malloc disk", 00:18:46.136 "block_size": 512, 00:18:46.136 "num_blocks": 65536, 00:18:46.136 "uuid": "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad", 00:18:46.136 "assigned_rate_limits": { 00:18:46.136 "rw_ios_per_sec": 0, 00:18:46.136 "rw_mbytes_per_sec": 0, 00:18:46.136 "r_mbytes_per_sec": 0, 00:18:46.136 "w_mbytes_per_sec": 0 00:18:46.136 }, 00:18:46.136 "claimed": true, 00:18:46.136 "claim_type": "exclusive_write", 00:18:46.136 "zoned": false, 00:18:46.136 "supported_io_types": { 00:18:46.136 "read": true, 00:18:46.136 "write": true, 00:18:46.136 "unmap": true, 00:18:46.136 "flush": true, 00:18:46.136 "reset": true, 00:18:46.136 "nvme_admin": false, 00:18:46.136 "nvme_io": false, 00:18:46.136 "nvme_io_md": false, 00:18:46.136 "write_zeroes": true, 00:18:46.136 "zcopy": true, 00:18:46.136 "get_zone_info": false, 00:18:46.136 "zone_management": false, 00:18:46.136 "zone_append": false, 00:18:46.136 "compare": false, 00:18:46.136 "compare_and_write": false, 00:18:46.136 "abort": true, 00:18:46.136 "seek_hole": false, 00:18:46.136 "seek_data": false, 00:18:46.136 "copy": true, 00:18:46.136 "nvme_iov_md": false 00:18:46.136 }, 00:18:46.136 "memory_domains": [ 00:18:46.136 { 00:18:46.136 "dma_device_id": "system", 00:18:46.136 "dma_device_type": 1 00:18:46.136 }, 00:18:46.136 { 00:18:46.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.136 "dma_device_type": 2 00:18:46.136 } 00:18:46.136 ], 00:18:46.136 "driver_specific": {} 00:18:46.136 } 00:18:46.136 ] 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.136 14:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.395 14:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.395 "name": "Existed_Raid", 00:18:46.395 "uuid": "f3fbf49e-203a-4a49-8bb7-7bdf0999c53b", 00:18:46.395 "strip_size_kb": 64, 00:18:46.395 "state": "configuring", 00:18:46.395 "raid_level": "concat", 00:18:46.395 "superblock": true, 00:18:46.395 "num_base_bdevs": 3, 00:18:46.395 "num_base_bdevs_discovered": 1, 00:18:46.395 "num_base_bdevs_operational": 3, 00:18:46.395 "base_bdevs_list": [ 00:18:46.395 { 00:18:46.395 "name": "BaseBdev1", 00:18:46.395 "uuid": "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad", 00:18:46.395 "is_configured": true, 00:18:46.395 "data_offset": 2048, 00:18:46.395 "data_size": 63488 00:18:46.395 }, 00:18:46.395 { 00:18:46.395 "name": "BaseBdev2", 00:18:46.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.395 "is_configured": false, 00:18:46.395 "data_offset": 0, 00:18:46.395 "data_size": 0 00:18:46.395 }, 00:18:46.395 { 00:18:46.395 "name": "BaseBdev3", 00:18:46.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.395 "is_configured": false, 00:18:46.395 "data_offset": 0, 00:18:46.395 "data_size": 0 00:18:46.395 } 00:18:46.395 ] 00:18:46.395 }' 00:18:46.395 14:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.395 14:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:46.963 14:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:47.264 [2024-07-15 14:11:33.065068] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:47.264 [2024-07-15 14:11:33.065309] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:47.264 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:47.521 [2024-07-15 14:11:33.409197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:47.521 [2024-07-15 14:11:33.410881] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.521 [2024-07-15 14:11:33.411095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.521 [2024-07-15 14:11:33.411224] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.521 [2024-07-15 14:11:33.411301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:47.521 14:11:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.521 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.779 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:47.779 "name": "Existed_Raid", 00:18:47.779 "uuid": "9aa2ba1f-c841-4599-bd3c-2fb64afbeec9", 00:18:47.779 "strip_size_kb": 64, 00:18:47.779 "state": "configuring", 00:18:47.779 "raid_level": "concat", 00:18:47.779 "superblock": true, 00:18:47.779 "num_base_bdevs": 3, 00:18:47.779 "num_base_bdevs_discovered": 1, 00:18:47.779 "num_base_bdevs_operational": 3, 00:18:47.779 "base_bdevs_list": [ 00:18:47.779 { 00:18:47.779 "name": "BaseBdev1", 00:18:47.779 "uuid": "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad", 00:18:47.779 "is_configured": true, 00:18:47.779 "data_offset": 2048, 00:18:47.779 "data_size": 63488 00:18:47.779 }, 00:18:47.779 { 00:18:47.779 "name": "BaseBdev2", 00:18:47.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.779 "is_configured": false, 00:18:47.779 "data_offset": 0, 00:18:47.779 "data_size": 0 00:18:47.779 }, 00:18:47.779 { 00:18:47.779 "name": "BaseBdev3", 00:18:47.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.779 "is_configured": false, 00:18:47.779 "data_offset": 0, 00:18:47.779 "data_size": 0 00:18:47.779 } 00:18:47.779 ] 00:18:47.779 }' 00:18:47.779 14:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:47.779 14:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:48.714 14:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:48.972 [2024-07-15 14:11:34.728556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:48.972 BaseBdev2 00:18:48.972 14:11:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:48.972 14:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:48.972 14:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:48.972 14:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- 
# local i 00:18:48.972 14:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:48.972 14:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:48.972 14:11:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.229 14:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:49.487 [ 00:18:49.487 { 00:18:49.487 "name": "BaseBdev2", 00:18:49.487 "aliases": [ 00:18:49.487 "b8cd004d-1cc1-4c56-98a6-75d623d663c7" 00:18:49.487 ], 00:18:49.487 "product_name": "Malloc disk", 00:18:49.487 "block_size": 512, 00:18:49.487 "num_blocks": 65536, 00:18:49.487 "uuid": "b8cd004d-1cc1-4c56-98a6-75d623d663c7", 00:18:49.487 "assigned_rate_limits": { 00:18:49.487 "rw_ios_per_sec": 0, 00:18:49.487 "rw_mbytes_per_sec": 0, 00:18:49.487 "r_mbytes_per_sec": 0, 00:18:49.487 "w_mbytes_per_sec": 0 00:18:49.487 }, 00:18:49.487 "claimed": true, 00:18:49.487 "claim_type": "exclusive_write", 00:18:49.487 "zoned": false, 00:18:49.487 "supported_io_types": { 00:18:49.487 "read": true, 00:18:49.487 "write": true, 00:18:49.487 "unmap": true, 00:18:49.487 "flush": true, 00:18:49.487 "reset": true, 00:18:49.487 "nvme_admin": false, 00:18:49.487 "nvme_io": false, 00:18:49.487 "nvme_io_md": false, 00:18:49.487 "write_zeroes": true, 00:18:49.487 "zcopy": true, 00:18:49.487 "get_zone_info": false, 00:18:49.487 "zone_management": false, 00:18:49.487 "zone_append": false, 00:18:49.487 "compare": false, 00:18:49.487 "compare_and_write": false, 00:18:49.487 "abort": true, 00:18:49.487 "seek_hole": false, 00:18:49.487 "seek_data": false, 00:18:49.487 "copy": true, 00:18:49.487 "nvme_iov_md": false 00:18:49.487 }, 00:18:49.487 "memory_domains": [ 00:18:49.487 { 00:18:49.487 "dma_device_id": "system", 00:18:49.487 "dma_device_type": 1 00:18:49.487 }, 00:18:49.487 { 00:18:49.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.487 "dma_device_type": 2 00:18:49.487 } 00:18:49.487 ], 00:18:49.487 "driver_specific": {} 00:18:49.487 } 00:18:49.487 ] 00:18:49.487 14:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:49.487 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:49.487 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.488 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.746 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:49.746 "name": "Existed_Raid", 00:18:49.746 "uuid": "9aa2ba1f-c841-4599-bd3c-2fb64afbeec9", 00:18:49.746 "strip_size_kb": 64, 00:18:49.746 "state": "configuring", 00:18:49.746 "raid_level": "concat", 00:18:49.746 "superblock": true, 00:18:49.746 "num_base_bdevs": 3, 00:18:49.746 "num_base_bdevs_discovered": 2, 00:18:49.746 "num_base_bdevs_operational": 3, 00:18:49.746 "base_bdevs_list": [ 00:18:49.746 { 00:18:49.746 "name": "BaseBdev1", 00:18:49.746 "uuid": "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad", 00:18:49.746 "is_configured": true, 00:18:49.746 "data_offset": 2048, 00:18:49.746 "data_size": 63488 00:18:49.746 }, 00:18:49.746 { 00:18:49.746 "name": "BaseBdev2", 00:18:49.746 "uuid": "b8cd004d-1cc1-4c56-98a6-75d623d663c7", 00:18:49.746 "is_configured": true, 00:18:49.746 "data_offset": 2048, 00:18:49.746 "data_size": 63488 00:18:49.746 }, 00:18:49.746 { 00:18:49.746 "name": "BaseBdev3", 00:18:49.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.746 "is_configured": false, 00:18:49.746 "data_offset": 0, 00:18:49.746 "data_size": 0 00:18:49.746 } 00:18:49.746 ] 00:18:49.746 }' 00:18:49.746 14:11:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:49.746 14:11:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:50.679 14:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:50.937 [2024-07-15 14:11:36.771191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:50.937 [2024-07-15 14:11:36.771651] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:50.937 [2024-07-15 14:11:36.771809] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:50.937 [2024-07-15 14:11:36.772020] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:50.937 [2024-07-15 14:11:36.772387] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:50.937 [2024-07-15 14:11:36.772509] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:50.937 [2024-07-15 14:11:36.772748] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:50.937 BaseBdev3 00:18:50.937 14:11:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:50.937 14:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:50.937 14:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:50.937 14:11:36 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@899 -- # local i 00:18:50.937 14:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:50.937 14:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:50.937 14:11:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:51.258 14:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:51.534 [ 00:18:51.534 { 00:18:51.534 "name": "BaseBdev3", 00:18:51.534 "aliases": [ 00:18:51.534 "580b4c79-33cc-489c-8c5f-0a0078935a4a" 00:18:51.534 ], 00:18:51.534 "product_name": "Malloc disk", 00:18:51.534 "block_size": 512, 00:18:51.534 "num_blocks": 65536, 00:18:51.534 "uuid": "580b4c79-33cc-489c-8c5f-0a0078935a4a", 00:18:51.534 "assigned_rate_limits": { 00:18:51.534 "rw_ios_per_sec": 0, 00:18:51.534 "rw_mbytes_per_sec": 0, 00:18:51.534 "r_mbytes_per_sec": 0, 00:18:51.534 "w_mbytes_per_sec": 0 00:18:51.534 }, 00:18:51.534 "claimed": true, 00:18:51.534 "claim_type": "exclusive_write", 00:18:51.534 "zoned": false, 00:18:51.534 "supported_io_types": { 00:18:51.534 "read": true, 00:18:51.534 "write": true, 00:18:51.534 "unmap": true, 00:18:51.534 "flush": true, 00:18:51.534 "reset": true, 00:18:51.534 "nvme_admin": false, 00:18:51.534 "nvme_io": false, 00:18:51.534 "nvme_io_md": false, 00:18:51.534 "write_zeroes": true, 00:18:51.534 "zcopy": true, 00:18:51.534 "get_zone_info": false, 00:18:51.534 "zone_management": false, 00:18:51.534 "zone_append": false, 00:18:51.534 "compare": false, 00:18:51.534 "compare_and_write": false, 00:18:51.534 "abort": true, 00:18:51.534 "seek_hole": false, 00:18:51.534 "seek_data": false, 00:18:51.534 "copy": true, 00:18:51.534 "nvme_iov_md": false 00:18:51.534 }, 00:18:51.534 "memory_domains": [ 00:18:51.534 { 00:18:51.534 "dma_device_id": "system", 00:18:51.534 "dma_device_type": 1 00:18:51.534 }, 00:18:51.534 { 00:18:51.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.534 "dma_device_type": 2 00:18:51.534 } 00:18:51.534 ], 00:18:51.534 "driver_specific": {} 00:18:51.534 } 00:18:51.534 ] 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:51.534 14:11:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.534 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:51.792 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:51.792 "name": "Existed_Raid", 00:18:51.792 "uuid": "9aa2ba1f-c841-4599-bd3c-2fb64afbeec9", 00:18:51.792 "strip_size_kb": 64, 00:18:51.792 "state": "online", 00:18:51.792 "raid_level": "concat", 00:18:51.792 "superblock": true, 00:18:51.792 "num_base_bdevs": 3, 00:18:51.792 "num_base_bdevs_discovered": 3, 00:18:51.792 "num_base_bdevs_operational": 3, 00:18:51.792 "base_bdevs_list": [ 00:18:51.792 { 00:18:51.792 "name": "BaseBdev1", 00:18:51.792 "uuid": "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad", 00:18:51.792 "is_configured": true, 00:18:51.792 "data_offset": 2048, 00:18:51.792 "data_size": 63488 00:18:51.792 }, 00:18:51.792 { 00:18:51.792 "name": "BaseBdev2", 00:18:51.792 "uuid": "b8cd004d-1cc1-4c56-98a6-75d623d663c7", 00:18:51.792 "is_configured": true, 00:18:51.792 "data_offset": 2048, 00:18:51.792 "data_size": 63488 00:18:51.792 }, 00:18:51.792 { 00:18:51.792 "name": "BaseBdev3", 00:18:51.792 "uuid": "580b4c79-33cc-489c-8c5f-0a0078935a4a", 00:18:51.792 "is_configured": true, 00:18:51.792 "data_offset": 2048, 00:18:51.792 "data_size": 63488 00:18:51.792 } 00:18:51.792 ] 00:18:51.792 }' 00:18:51.792 14:11:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:51.792 14:11:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:52.727 [2024-07-15 14:11:38.687699] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.727 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:52.727 "name": "Existed_Raid", 00:18:52.727 "aliases": [ 00:18:52.727 "9aa2ba1f-c841-4599-bd3c-2fb64afbeec9" 00:18:52.727 ], 00:18:52.727 "product_name": "Raid Volume", 00:18:52.727 "block_size": 512, 00:18:52.727 "num_blocks": 190464, 00:18:52.727 "uuid": 
"9aa2ba1f-c841-4599-bd3c-2fb64afbeec9", 00:18:52.727 "assigned_rate_limits": { 00:18:52.727 "rw_ios_per_sec": 0, 00:18:52.727 "rw_mbytes_per_sec": 0, 00:18:52.727 "r_mbytes_per_sec": 0, 00:18:52.727 "w_mbytes_per_sec": 0 00:18:52.727 }, 00:18:52.727 "claimed": false, 00:18:52.727 "zoned": false, 00:18:52.727 "supported_io_types": { 00:18:52.727 "read": true, 00:18:52.727 "write": true, 00:18:52.727 "unmap": true, 00:18:52.727 "flush": true, 00:18:52.727 "reset": true, 00:18:52.727 "nvme_admin": false, 00:18:52.727 "nvme_io": false, 00:18:52.727 "nvme_io_md": false, 00:18:52.728 "write_zeroes": true, 00:18:52.728 "zcopy": false, 00:18:52.728 "get_zone_info": false, 00:18:52.728 "zone_management": false, 00:18:52.728 "zone_append": false, 00:18:52.728 "compare": false, 00:18:52.728 "compare_and_write": false, 00:18:52.728 "abort": false, 00:18:52.728 "seek_hole": false, 00:18:52.728 "seek_data": false, 00:18:52.728 "copy": false, 00:18:52.728 "nvme_iov_md": false 00:18:52.728 }, 00:18:52.728 "memory_domains": [ 00:18:52.728 { 00:18:52.728 "dma_device_id": "system", 00:18:52.728 "dma_device_type": 1 00:18:52.728 }, 00:18:52.728 { 00:18:52.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.728 "dma_device_type": 2 00:18:52.728 }, 00:18:52.728 { 00:18:52.728 "dma_device_id": "system", 00:18:52.728 "dma_device_type": 1 00:18:52.728 }, 00:18:52.728 { 00:18:52.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.728 "dma_device_type": 2 00:18:52.728 }, 00:18:52.728 { 00:18:52.728 "dma_device_id": "system", 00:18:52.728 "dma_device_type": 1 00:18:52.728 }, 00:18:52.728 { 00:18:52.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.728 "dma_device_type": 2 00:18:52.728 } 00:18:52.728 ], 00:18:52.728 "driver_specific": { 00:18:52.728 "raid": { 00:18:52.728 "uuid": "9aa2ba1f-c841-4599-bd3c-2fb64afbeec9", 00:18:52.728 "strip_size_kb": 64, 00:18:52.728 "state": "online", 00:18:52.728 "raid_level": "concat", 00:18:52.728 "superblock": true, 00:18:52.728 "num_base_bdevs": 3, 00:18:52.728 "num_base_bdevs_discovered": 3, 00:18:52.728 "num_base_bdevs_operational": 3, 00:18:52.728 "base_bdevs_list": [ 00:18:52.728 { 00:18:52.728 "name": "BaseBdev1", 00:18:52.728 "uuid": "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad", 00:18:52.728 "is_configured": true, 00:18:52.728 "data_offset": 2048, 00:18:52.728 "data_size": 63488 00:18:52.728 }, 00:18:52.728 { 00:18:52.728 "name": "BaseBdev2", 00:18:52.728 "uuid": "b8cd004d-1cc1-4c56-98a6-75d623d663c7", 00:18:52.728 "is_configured": true, 00:18:52.728 "data_offset": 2048, 00:18:52.728 "data_size": 63488 00:18:52.728 }, 00:18:52.728 { 00:18:52.728 "name": "BaseBdev3", 00:18:52.728 "uuid": "580b4c79-33cc-489c-8c5f-0a0078935a4a", 00:18:52.728 "is_configured": true, 00:18:52.728 "data_offset": 2048, 00:18:52.728 "data_size": 63488 00:18:52.728 } 00:18:52.728 ] 00:18:52.728 } 00:18:52.728 } 00:18:52.728 }' 00:18:52.728 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:52.987 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:52.987 BaseBdev2 00:18:52.987 BaseBdev3' 00:18:52.987 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:52.987 14:11:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:52.987 14:11:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:53.246 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:53.246 "name": "BaseBdev1", 00:18:53.246 "aliases": [ 00:18:53.246 "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad" 00:18:53.246 ], 00:18:53.246 "product_name": "Malloc disk", 00:18:53.246 "block_size": 512, 00:18:53.246 "num_blocks": 65536, 00:18:53.246 "uuid": "3ddd67c0-1ad7-4629-b63c-f50159b8f1ad", 00:18:53.246 "assigned_rate_limits": { 00:18:53.246 "rw_ios_per_sec": 0, 00:18:53.246 "rw_mbytes_per_sec": 0, 00:18:53.246 "r_mbytes_per_sec": 0, 00:18:53.246 "w_mbytes_per_sec": 0 00:18:53.246 }, 00:18:53.246 "claimed": true, 00:18:53.246 "claim_type": "exclusive_write", 00:18:53.246 "zoned": false, 00:18:53.246 "supported_io_types": { 00:18:53.246 "read": true, 00:18:53.246 "write": true, 00:18:53.246 "unmap": true, 00:18:53.246 "flush": true, 00:18:53.246 "reset": true, 00:18:53.246 "nvme_admin": false, 00:18:53.246 "nvme_io": false, 00:18:53.246 "nvme_io_md": false, 00:18:53.246 "write_zeroes": true, 00:18:53.246 "zcopy": true, 00:18:53.246 "get_zone_info": false, 00:18:53.246 "zone_management": false, 00:18:53.246 "zone_append": false, 00:18:53.246 "compare": false, 00:18:53.246 "compare_and_write": false, 00:18:53.246 "abort": true, 00:18:53.246 "seek_hole": false, 00:18:53.246 "seek_data": false, 00:18:53.246 "copy": true, 00:18:53.246 "nvme_iov_md": false 00:18:53.246 }, 00:18:53.246 "memory_domains": [ 00:18:53.246 { 00:18:53.246 "dma_device_id": "system", 00:18:53.246 "dma_device_type": 1 00:18:53.246 }, 00:18:53.246 { 00:18:53.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.246 "dma_device_type": 2 00:18:53.246 } 00:18:53.246 ], 00:18:53.246 "driver_specific": {} 00:18:53.246 }' 00:18:53.246 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:53.246 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:53.246 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:53.246 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:53.246 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:53.505 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:53.764 14:11:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:53.764 "name": "BaseBdev2", 00:18:53.764 "aliases": [ 00:18:53.764 "b8cd004d-1cc1-4c56-98a6-75d623d663c7" 00:18:53.764 ], 00:18:53.764 "product_name": "Malloc disk", 00:18:53.764 "block_size": 512, 00:18:53.764 "num_blocks": 65536, 00:18:53.764 "uuid": "b8cd004d-1cc1-4c56-98a6-75d623d663c7", 00:18:53.764 "assigned_rate_limits": { 00:18:53.764 "rw_ios_per_sec": 0, 00:18:53.764 "rw_mbytes_per_sec": 0, 00:18:53.764 "r_mbytes_per_sec": 0, 00:18:53.764 "w_mbytes_per_sec": 0 00:18:53.764 }, 00:18:53.764 "claimed": true, 00:18:53.764 "claim_type": "exclusive_write", 00:18:53.764 "zoned": false, 00:18:53.764 "supported_io_types": { 00:18:53.764 "read": true, 00:18:53.764 "write": true, 00:18:53.764 "unmap": true, 00:18:53.764 "flush": true, 00:18:53.764 "reset": true, 00:18:53.764 "nvme_admin": false, 00:18:53.764 "nvme_io": false, 00:18:53.764 "nvme_io_md": false, 00:18:53.764 "write_zeroes": true, 00:18:53.764 "zcopy": true, 00:18:53.764 "get_zone_info": false, 00:18:53.764 "zone_management": false, 00:18:53.764 "zone_append": false, 00:18:53.764 "compare": false, 00:18:53.764 "compare_and_write": false, 00:18:53.764 "abort": true, 00:18:53.764 "seek_hole": false, 00:18:53.764 "seek_data": false, 00:18:53.764 "copy": true, 00:18:53.764 "nvme_iov_md": false 00:18:53.764 }, 00:18:53.764 "memory_domains": [ 00:18:53.764 { 00:18:53.764 "dma_device_id": "system", 00:18:53.764 "dma_device_type": 1 00:18:53.764 }, 00:18:53.764 { 00:18:53.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.764 "dma_device_type": 2 00:18:53.764 } 00:18:53.764 ], 00:18:53.764 "driver_specific": {} 00:18:53.764 }' 00:18:53.764 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.023 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.023 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:54.023 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.023 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.023 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:54.023 14:11:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.023 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.282 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:54.282 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.282 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:54.282 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:54.282 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:54.282 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:54.282 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:54.542 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:54.542 "name": "BaseBdev3", 00:18:54.542 "aliases": [ 00:18:54.542 
"580b4c79-33cc-489c-8c5f-0a0078935a4a" 00:18:54.542 ], 00:18:54.542 "product_name": "Malloc disk", 00:18:54.542 "block_size": 512, 00:18:54.542 "num_blocks": 65536, 00:18:54.542 "uuid": "580b4c79-33cc-489c-8c5f-0a0078935a4a", 00:18:54.542 "assigned_rate_limits": { 00:18:54.542 "rw_ios_per_sec": 0, 00:18:54.542 "rw_mbytes_per_sec": 0, 00:18:54.542 "r_mbytes_per_sec": 0, 00:18:54.542 "w_mbytes_per_sec": 0 00:18:54.542 }, 00:18:54.542 "claimed": true, 00:18:54.542 "claim_type": "exclusive_write", 00:18:54.542 "zoned": false, 00:18:54.542 "supported_io_types": { 00:18:54.542 "read": true, 00:18:54.542 "write": true, 00:18:54.542 "unmap": true, 00:18:54.542 "flush": true, 00:18:54.542 "reset": true, 00:18:54.542 "nvme_admin": false, 00:18:54.542 "nvme_io": false, 00:18:54.542 "nvme_io_md": false, 00:18:54.542 "write_zeroes": true, 00:18:54.542 "zcopy": true, 00:18:54.542 "get_zone_info": false, 00:18:54.542 "zone_management": false, 00:18:54.542 "zone_append": false, 00:18:54.542 "compare": false, 00:18:54.542 "compare_and_write": false, 00:18:54.542 "abort": true, 00:18:54.542 "seek_hole": false, 00:18:54.542 "seek_data": false, 00:18:54.542 "copy": true, 00:18:54.542 "nvme_iov_md": false 00:18:54.542 }, 00:18:54.542 "memory_domains": [ 00:18:54.542 { 00:18:54.542 "dma_device_id": "system", 00:18:54.542 "dma_device_type": 1 00:18:54.542 }, 00:18:54.542 { 00:18:54.542 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:54.542 "dma_device_type": 2 00:18:54.542 } 00:18:54.542 ], 00:18:54.542 "driver_specific": {} 00:18:54.542 }' 00:18:54.542 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.542 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:54.801 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.067 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:55.067 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:55.067 14:11:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:55.333 [2024-07-15 14:11:41.147981] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:55.333 [2024-07-15 14:11:41.148316] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:55.333 [2024-07-15 14:11:41.148474] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 
-- # has_redundancy concat 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.333 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.592 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.592 "name": "Existed_Raid", 00:18:55.592 "uuid": "9aa2ba1f-c841-4599-bd3c-2fb64afbeec9", 00:18:55.592 "strip_size_kb": 64, 00:18:55.592 "state": "offline", 00:18:55.592 "raid_level": "concat", 00:18:55.592 "superblock": true, 00:18:55.592 "num_base_bdevs": 3, 00:18:55.592 "num_base_bdevs_discovered": 2, 00:18:55.592 "num_base_bdevs_operational": 2, 00:18:55.592 "base_bdevs_list": [ 00:18:55.592 { 00:18:55.592 "name": null, 00:18:55.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.592 "is_configured": false, 00:18:55.592 "data_offset": 2048, 00:18:55.592 "data_size": 63488 00:18:55.592 }, 00:18:55.592 { 00:18:55.592 "name": "BaseBdev2", 00:18:55.592 "uuid": "b8cd004d-1cc1-4c56-98a6-75d623d663c7", 00:18:55.592 "is_configured": true, 00:18:55.592 "data_offset": 2048, 00:18:55.592 "data_size": 63488 00:18:55.592 }, 00:18:55.592 { 00:18:55.592 "name": "BaseBdev3", 00:18:55.592 "uuid": "580b4c79-33cc-489c-8c5f-0a0078935a4a", 00:18:55.592 "is_configured": true, 00:18:55.592 "data_offset": 2048, 00:18:55.592 "data_size": 63488 00:18:55.592 } 00:18:55.592 ] 00:18:55.592 }' 00:18:55.592 14:11:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.592 14:11:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:56.530 14:11:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:56.530 14:11:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:56.530 14:11:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:56.530 14:11:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.789 14:11:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:56.789 14:11:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:56.789 14:11:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:57.047 [2024-07-15 14:11:42.967184] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:57.306 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:57.306 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:57.306 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.306 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:57.564 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:57.564 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.564 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:57.825 [2024-07-15 14:11:43.695282] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:57.825 [2024-07-15 14:11:43.695587] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:57.825 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:57.825 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:57.825 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.825 14:11:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:58.393 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:58.393 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:58.393 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:58.393 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:58.393 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:58.393 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:58.652 BaseBdev2 00:18:58.652 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:58.652 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:58.652 14:11:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:58.652 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:58.652 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:58.652 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:58.652 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:58.911 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:59.276 [ 00:18:59.276 { 00:18:59.276 "name": "BaseBdev2", 00:18:59.276 "aliases": [ 00:18:59.276 "d4031a9a-6cf2-41e9-b512-fb48edfab950" 00:18:59.276 ], 00:18:59.276 "product_name": "Malloc disk", 00:18:59.276 "block_size": 512, 00:18:59.276 "num_blocks": 65536, 00:18:59.276 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:18:59.276 "assigned_rate_limits": { 00:18:59.276 "rw_ios_per_sec": 0, 00:18:59.276 "rw_mbytes_per_sec": 0, 00:18:59.276 "r_mbytes_per_sec": 0, 00:18:59.276 "w_mbytes_per_sec": 0 00:18:59.276 }, 00:18:59.276 "claimed": false, 00:18:59.276 "zoned": false, 00:18:59.276 "supported_io_types": { 00:18:59.276 "read": true, 00:18:59.276 "write": true, 00:18:59.276 "unmap": true, 00:18:59.276 "flush": true, 00:18:59.276 "reset": true, 00:18:59.276 "nvme_admin": false, 00:18:59.276 "nvme_io": false, 00:18:59.276 "nvme_io_md": false, 00:18:59.276 "write_zeroes": true, 00:18:59.276 "zcopy": true, 00:18:59.276 "get_zone_info": false, 00:18:59.276 "zone_management": false, 00:18:59.276 "zone_append": false, 00:18:59.276 "compare": false, 00:18:59.276 "compare_and_write": false, 00:18:59.276 "abort": true, 00:18:59.276 "seek_hole": false, 00:18:59.276 "seek_data": false, 00:18:59.276 "copy": true, 00:18:59.276 "nvme_iov_md": false 00:18:59.276 }, 00:18:59.276 "memory_domains": [ 00:18:59.276 { 00:18:59.276 "dma_device_id": "system", 00:18:59.276 "dma_device_type": 1 00:18:59.276 }, 00:18:59.276 { 00:18:59.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:59.276 "dma_device_type": 2 00:18:59.276 } 00:18:59.276 ], 00:18:59.276 "driver_specific": {} 00:18:59.276 } 00:18:59.276 ] 00:18:59.276 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:18:59.276 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:59.276 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:59.276 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:59.534 BaseBdev3 00:18:59.534 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:59.534 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:59.534 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:59.534 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:18:59.534 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:59.534 14:11:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:59.534 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:59.792 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:00.049 [ 00:19:00.049 { 00:19:00.049 "name": "BaseBdev3", 00:19:00.049 "aliases": [ 00:19:00.049 "98d37710-a07f-4c10-9490-673fcb5838c7" 00:19:00.049 ], 00:19:00.049 "product_name": "Malloc disk", 00:19:00.049 "block_size": 512, 00:19:00.049 "num_blocks": 65536, 00:19:00.049 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:00.049 "assigned_rate_limits": { 00:19:00.049 "rw_ios_per_sec": 0, 00:19:00.049 "rw_mbytes_per_sec": 0, 00:19:00.049 "r_mbytes_per_sec": 0, 00:19:00.049 "w_mbytes_per_sec": 0 00:19:00.049 }, 00:19:00.049 "claimed": false, 00:19:00.049 "zoned": false, 00:19:00.049 "supported_io_types": { 00:19:00.049 "read": true, 00:19:00.049 "write": true, 00:19:00.049 "unmap": true, 00:19:00.049 "flush": true, 00:19:00.049 "reset": true, 00:19:00.049 "nvme_admin": false, 00:19:00.049 "nvme_io": false, 00:19:00.049 "nvme_io_md": false, 00:19:00.049 "write_zeroes": true, 00:19:00.049 "zcopy": true, 00:19:00.049 "get_zone_info": false, 00:19:00.049 "zone_management": false, 00:19:00.049 "zone_append": false, 00:19:00.049 "compare": false, 00:19:00.049 "compare_and_write": false, 00:19:00.049 "abort": true, 00:19:00.049 "seek_hole": false, 00:19:00.049 "seek_data": false, 00:19:00.049 "copy": true, 00:19:00.049 "nvme_iov_md": false 00:19:00.049 }, 00:19:00.049 "memory_domains": [ 00:19:00.049 { 00:19:00.049 "dma_device_id": "system", 00:19:00.049 "dma_device_type": 1 00:19:00.049 }, 00:19:00.049 { 00:19:00.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:00.049 "dma_device_type": 2 00:19:00.049 } 00:19:00.049 ], 00:19:00.049 "driver_specific": {} 00:19:00.049 } 00:19:00.049 ] 00:19:00.049 14:11:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:00.049 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:00.049 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:00.049 14:11:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:00.305 [2024-07-15 14:11:46.139712] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:00.305 [2024-07-15 14:11:46.139987] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:00.305 [2024-07-15 14:11:46.140153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.305 [2024-07-15 14:11:46.141681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.305 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.306 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:00.563 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:00.563 "name": "Existed_Raid", 00:19:00.563 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:00.563 "strip_size_kb": 64, 00:19:00.563 "state": "configuring", 00:19:00.563 "raid_level": "concat", 00:19:00.563 "superblock": true, 00:19:00.563 "num_base_bdevs": 3, 00:19:00.563 "num_base_bdevs_discovered": 2, 00:19:00.563 "num_base_bdevs_operational": 3, 00:19:00.563 "base_bdevs_list": [ 00:19:00.563 { 00:19:00.564 "name": "BaseBdev1", 00:19:00.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:00.564 "is_configured": false, 00:19:00.564 "data_offset": 0, 00:19:00.564 "data_size": 0 00:19:00.564 }, 00:19:00.564 { 00:19:00.564 "name": "BaseBdev2", 00:19:00.564 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:00.564 "is_configured": true, 00:19:00.564 "data_offset": 2048, 00:19:00.564 "data_size": 63488 00:19:00.564 }, 00:19:00.564 { 00:19:00.564 "name": "BaseBdev3", 00:19:00.564 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:00.564 "is_configured": true, 00:19:00.564 "data_offset": 2048, 00:19:00.564 "data_size": 63488 00:19:00.564 } 00:19:00.564 ] 00:19:00.564 }' 00:19:00.564 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:00.564 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:01.129 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:01.387 [2024-07-15 14:11:47.343864] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:01.387 14:11:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.387 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.644 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:01.644 "name": "Existed_Raid", 00:19:01.644 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:01.644 "strip_size_kb": 64, 00:19:01.644 "state": "configuring", 00:19:01.644 "raid_level": "concat", 00:19:01.644 "superblock": true, 00:19:01.644 "num_base_bdevs": 3, 00:19:01.644 "num_base_bdevs_discovered": 1, 00:19:01.644 "num_base_bdevs_operational": 3, 00:19:01.644 "base_bdevs_list": [ 00:19:01.644 { 00:19:01.644 "name": "BaseBdev1", 00:19:01.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.644 "is_configured": false, 00:19:01.644 "data_offset": 0, 00:19:01.644 "data_size": 0 00:19:01.644 }, 00:19:01.644 { 00:19:01.644 "name": null, 00:19:01.644 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:01.644 "is_configured": false, 00:19:01.644 "data_offset": 2048, 00:19:01.644 "data_size": 63488 00:19:01.644 }, 00:19:01.644 { 00:19:01.644 "name": "BaseBdev3", 00:19:01.644 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:01.644 "is_configured": true, 00:19:01.644 "data_offset": 2048, 00:19:01.644 "data_size": 63488 00:19:01.645 } 00:19:01.645 ] 00:19:01.645 }' 00:19:01.645 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.645 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:02.331 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.331 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:02.606 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:02.606 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:03.174 [2024-07-15 14:11:48.884592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.174 BaseBdev1 00:19:03.174 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:03.174 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:03.174 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.174 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:03.174 14:11:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.174 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.174 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.433 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:03.692 [ 00:19:03.692 { 00:19:03.692 "name": "BaseBdev1", 00:19:03.692 "aliases": [ 00:19:03.692 "8b7298c7-d825-48e2-b07d-6127e5e4fa83" 00:19:03.692 ], 00:19:03.692 "product_name": "Malloc disk", 00:19:03.692 "block_size": 512, 00:19:03.692 "num_blocks": 65536, 00:19:03.692 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:03.692 "assigned_rate_limits": { 00:19:03.692 "rw_ios_per_sec": 0, 00:19:03.692 "rw_mbytes_per_sec": 0, 00:19:03.692 "r_mbytes_per_sec": 0, 00:19:03.692 "w_mbytes_per_sec": 0 00:19:03.692 }, 00:19:03.692 "claimed": true, 00:19:03.692 "claim_type": "exclusive_write", 00:19:03.692 "zoned": false, 00:19:03.692 "supported_io_types": { 00:19:03.692 "read": true, 00:19:03.692 "write": true, 00:19:03.692 "unmap": true, 00:19:03.692 "flush": true, 00:19:03.692 "reset": true, 00:19:03.692 "nvme_admin": false, 00:19:03.692 "nvme_io": false, 00:19:03.692 "nvme_io_md": false, 00:19:03.692 "write_zeroes": true, 00:19:03.692 "zcopy": true, 00:19:03.692 "get_zone_info": false, 00:19:03.692 "zone_management": false, 00:19:03.692 "zone_append": false, 00:19:03.692 "compare": false, 00:19:03.692 "compare_and_write": false, 00:19:03.692 "abort": true, 00:19:03.692 "seek_hole": false, 00:19:03.692 "seek_data": false, 00:19:03.692 "copy": true, 00:19:03.692 "nvme_iov_md": false 00:19:03.692 }, 00:19:03.692 "memory_domains": [ 00:19:03.692 { 00:19:03.692 "dma_device_id": "system", 00:19:03.692 "dma_device_type": 1 00:19:03.692 }, 00:19:03.692 { 00:19:03.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.692 "dma_device_type": 2 00:19:03.692 } 00:19:03.692 ], 00:19:03.692 "driver_specific": {} 00:19:03.692 } 00:19:03.692 ] 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local 
tmp 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.692 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.951 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:03.951 "name": "Existed_Raid", 00:19:03.951 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:03.951 "strip_size_kb": 64, 00:19:03.951 "state": "configuring", 00:19:03.951 "raid_level": "concat", 00:19:03.951 "superblock": true, 00:19:03.951 "num_base_bdevs": 3, 00:19:03.951 "num_base_bdevs_discovered": 2, 00:19:03.951 "num_base_bdevs_operational": 3, 00:19:03.951 "base_bdevs_list": [ 00:19:03.951 { 00:19:03.951 "name": "BaseBdev1", 00:19:03.951 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:03.951 "is_configured": true, 00:19:03.951 "data_offset": 2048, 00:19:03.951 "data_size": 63488 00:19:03.951 }, 00:19:03.951 { 00:19:03.951 "name": null, 00:19:03.951 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:03.951 "is_configured": false, 00:19:03.951 "data_offset": 2048, 00:19:03.951 "data_size": 63488 00:19:03.951 }, 00:19:03.951 { 00:19:03.951 "name": "BaseBdev3", 00:19:03.951 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:03.951 "is_configured": true, 00:19:03.951 "data_offset": 2048, 00:19:03.951 "data_size": 63488 00:19:03.951 } 00:19:03.951 ] 00:19:03.951 }' 00:19:03.951 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:03.951 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.518 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:04.518 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.083 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:05.083 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:05.340 [2024-07-15 14:11:51.159876] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:05.340 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:05.598 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:05.598 "name": "Existed_Raid", 00:19:05.598 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:05.598 "strip_size_kb": 64, 00:19:05.598 "state": "configuring", 00:19:05.598 "raid_level": "concat", 00:19:05.598 "superblock": true, 00:19:05.598 "num_base_bdevs": 3, 00:19:05.598 "num_base_bdevs_discovered": 1, 00:19:05.598 "num_base_bdevs_operational": 3, 00:19:05.598 "base_bdevs_list": [ 00:19:05.598 { 00:19:05.598 "name": "BaseBdev1", 00:19:05.598 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:05.598 "is_configured": true, 00:19:05.598 "data_offset": 2048, 00:19:05.598 "data_size": 63488 00:19:05.598 }, 00:19:05.598 { 00:19:05.598 "name": null, 00:19:05.598 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:05.598 "is_configured": false, 00:19:05.598 "data_offset": 2048, 00:19:05.598 "data_size": 63488 00:19:05.598 }, 00:19:05.598 { 00:19:05.598 "name": null, 00:19:05.598 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:05.598 "is_configured": false, 00:19:05.598 "data_offset": 2048, 00:19:05.598 "data_size": 63488 00:19:05.598 } 00:19:05.598 ] 00:19:05.598 }' 00:19:05.598 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:05.598 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.531 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:06.531 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:06.789 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:06.789 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:07.047 [2024-07-15 14:11:52.844184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.047 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:07.305 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:07.305 "name": "Existed_Raid", 00:19:07.305 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:07.305 "strip_size_kb": 64, 00:19:07.305 "state": "configuring", 00:19:07.305 "raid_level": "concat", 00:19:07.305 "superblock": true, 00:19:07.305 "num_base_bdevs": 3, 00:19:07.305 "num_base_bdevs_discovered": 2, 00:19:07.305 "num_base_bdevs_operational": 3, 00:19:07.305 "base_bdevs_list": [ 00:19:07.305 { 00:19:07.305 "name": "BaseBdev1", 00:19:07.305 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:07.305 "is_configured": true, 00:19:07.305 "data_offset": 2048, 00:19:07.305 "data_size": 63488 00:19:07.305 }, 00:19:07.305 { 00:19:07.305 "name": null, 00:19:07.305 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:07.305 "is_configured": false, 00:19:07.305 "data_offset": 2048, 00:19:07.305 "data_size": 63488 00:19:07.305 }, 00:19:07.305 { 00:19:07.305 "name": "BaseBdev3", 00:19:07.305 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:07.305 "is_configured": true, 00:19:07.305 "data_offset": 2048, 00:19:07.305 "data_size": 63488 00:19:07.305 } 00:19:07.305 ] 00:19:07.305 }' 00:19:07.305 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:07.305 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.871 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:07.871 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.129 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:08.129 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:08.387 [2024-07-15 14:11:54.224380] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:08.387 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:08.646 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:08.646 "name": "Existed_Raid", 00:19:08.646 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:08.646 "strip_size_kb": 64, 00:19:08.646 "state": "configuring", 00:19:08.646 "raid_level": "concat", 00:19:08.646 "superblock": true, 00:19:08.646 "num_base_bdevs": 3, 00:19:08.646 "num_base_bdevs_discovered": 1, 00:19:08.646 "num_base_bdevs_operational": 3, 00:19:08.646 "base_bdevs_list": [ 00:19:08.646 { 00:19:08.646 "name": null, 00:19:08.646 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:08.646 "is_configured": false, 00:19:08.646 "data_offset": 2048, 00:19:08.646 "data_size": 63488 00:19:08.646 }, 00:19:08.646 { 00:19:08.646 "name": null, 00:19:08.646 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:08.646 "is_configured": false, 00:19:08.646 "data_offset": 2048, 00:19:08.646 "data_size": 63488 00:19:08.646 }, 00:19:08.646 { 00:19:08.646 "name": "BaseBdev3", 00:19:08.646 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:08.646 "is_configured": true, 00:19:08.646 "data_offset": 2048, 00:19:08.646 "data_size": 63488 00:19:08.646 } 00:19:08.646 ] 00:19:08.646 }' 00:19:08.646 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:08.646 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.582 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.582 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:09.582 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:09.582 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:09.841 [2024-07-15 14:11:55.801192] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:09.841 14:11:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.841 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:10.407 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:10.407 "name": "Existed_Raid", 00:19:10.407 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:10.407 "strip_size_kb": 64, 00:19:10.407 "state": "configuring", 00:19:10.407 "raid_level": "concat", 00:19:10.407 "superblock": true, 00:19:10.407 "num_base_bdevs": 3, 00:19:10.407 "num_base_bdevs_discovered": 2, 00:19:10.407 "num_base_bdevs_operational": 3, 00:19:10.407 "base_bdevs_list": [ 00:19:10.407 { 00:19:10.407 "name": null, 00:19:10.407 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:10.407 "is_configured": false, 00:19:10.407 "data_offset": 2048, 00:19:10.407 "data_size": 63488 00:19:10.407 }, 00:19:10.407 { 00:19:10.407 "name": "BaseBdev2", 00:19:10.407 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:10.407 "is_configured": true, 00:19:10.407 "data_offset": 2048, 00:19:10.407 "data_size": 63488 00:19:10.407 }, 00:19:10.407 { 00:19:10.407 "name": "BaseBdev3", 00:19:10.407 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:10.407 "is_configured": true, 00:19:10.407 "data_offset": 2048, 00:19:10.407 "data_size": 63488 00:19:10.407 } 00:19:10.407 ] 00:19:10.407 }' 00:19:10.407 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:10.407 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.985 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.985 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:11.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:11.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.258 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:11.516 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8b7298c7-d825-48e2-b07d-6127e5e4fa83 00:19:11.774 [2024-07-15 14:11:57.551673] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:11.774 [2024-07-15 14:11:57.552037] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:11.774 
NewBaseBdev 00:19:11.774 [2024-07-15 14:11:57.553111] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:11.774 [2024-07-15 14:11:57.553324] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:11.774 [2024-07-15 14:11:57.553651] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:11.774 [2024-07-15 14:11:57.553794] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:19:11.774 [2024-07-15 14:11:57.554009] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.774 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:11.774 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:19:11.774 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:11.774 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:19:11.774 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:11.774 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:11.774 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:12.032 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:12.290 [ 00:19:12.290 { 00:19:12.290 "name": "NewBaseBdev", 00:19:12.290 "aliases": [ 00:19:12.290 "8b7298c7-d825-48e2-b07d-6127e5e4fa83" 00:19:12.290 ], 00:19:12.290 "product_name": "Malloc disk", 00:19:12.290 "block_size": 512, 00:19:12.290 "num_blocks": 65536, 00:19:12.290 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:12.290 "assigned_rate_limits": { 00:19:12.290 "rw_ios_per_sec": 0, 00:19:12.290 "rw_mbytes_per_sec": 0, 00:19:12.290 "r_mbytes_per_sec": 0, 00:19:12.290 "w_mbytes_per_sec": 0 00:19:12.290 }, 00:19:12.290 "claimed": true, 00:19:12.290 "claim_type": "exclusive_write", 00:19:12.290 "zoned": false, 00:19:12.290 "supported_io_types": { 00:19:12.290 "read": true, 00:19:12.290 "write": true, 00:19:12.290 "unmap": true, 00:19:12.290 "flush": true, 00:19:12.290 "reset": true, 00:19:12.290 "nvme_admin": false, 00:19:12.290 "nvme_io": false, 00:19:12.290 "nvme_io_md": false, 00:19:12.290 "write_zeroes": true, 00:19:12.290 "zcopy": true, 00:19:12.290 "get_zone_info": false, 00:19:12.290 "zone_management": false, 00:19:12.290 "zone_append": false, 00:19:12.290 "compare": false, 00:19:12.290 "compare_and_write": false, 00:19:12.290 "abort": true, 00:19:12.290 "seek_hole": false, 00:19:12.290 "seek_data": false, 00:19:12.290 "copy": true, 00:19:12.290 "nvme_iov_md": false 00:19:12.290 }, 00:19:12.290 "memory_domains": [ 00:19:12.290 { 00:19:12.290 "dma_device_id": "system", 00:19:12.290 "dma_device_type": 1 00:19:12.290 }, 00:19:12.290 { 00:19:12.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.290 "dma_device_type": 2 00:19:12.290 } 00:19:12.290 ], 00:19:12.290 "driver_specific": {} 00:19:12.290 } 00:19:12.290 ] 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:19:12.290 14:11:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.290 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.548 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:12.548 "name": "Existed_Raid", 00:19:12.548 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:12.548 "strip_size_kb": 64, 00:19:12.548 "state": "online", 00:19:12.548 "raid_level": "concat", 00:19:12.548 "superblock": true, 00:19:12.548 "num_base_bdevs": 3, 00:19:12.548 "num_base_bdevs_discovered": 3, 00:19:12.548 "num_base_bdevs_operational": 3, 00:19:12.548 "base_bdevs_list": [ 00:19:12.548 { 00:19:12.548 "name": "NewBaseBdev", 00:19:12.548 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:12.548 "is_configured": true, 00:19:12.548 "data_offset": 2048, 00:19:12.548 "data_size": 63488 00:19:12.548 }, 00:19:12.548 { 00:19:12.548 "name": "BaseBdev2", 00:19:12.548 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:12.548 "is_configured": true, 00:19:12.548 "data_offset": 2048, 00:19:12.548 "data_size": 63488 00:19:12.548 }, 00:19:12.548 { 00:19:12.548 "name": "BaseBdev3", 00:19:12.548 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:12.548 "is_configured": true, 00:19:12.548 "data_offset": 2048, 00:19:12.548 "data_size": 63488 00:19:12.548 } 00:19:12.548 ] 00:19:12.548 }' 00:19:12.548 14:11:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:12.548 14:11:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.114 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:13.115 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:13.115 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:13.115 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:13.115 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:13.115 14:11:59 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@198 -- # local name 00:19:13.115 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:13.115 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:13.372 [2024-07-15 14:11:59.348219] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.372 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:13.372 "name": "Existed_Raid", 00:19:13.372 "aliases": [ 00:19:13.372 "4f787df3-6323-446d-8a86-f7813f03f0b9" 00:19:13.372 ], 00:19:13.372 "product_name": "Raid Volume", 00:19:13.372 "block_size": 512, 00:19:13.372 "num_blocks": 190464, 00:19:13.372 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:13.372 "assigned_rate_limits": { 00:19:13.372 "rw_ios_per_sec": 0, 00:19:13.372 "rw_mbytes_per_sec": 0, 00:19:13.372 "r_mbytes_per_sec": 0, 00:19:13.372 "w_mbytes_per_sec": 0 00:19:13.372 }, 00:19:13.372 "claimed": false, 00:19:13.372 "zoned": false, 00:19:13.372 "supported_io_types": { 00:19:13.372 "read": true, 00:19:13.372 "write": true, 00:19:13.372 "unmap": true, 00:19:13.372 "flush": true, 00:19:13.372 "reset": true, 00:19:13.372 "nvme_admin": false, 00:19:13.372 "nvme_io": false, 00:19:13.372 "nvme_io_md": false, 00:19:13.372 "write_zeroes": true, 00:19:13.372 "zcopy": false, 00:19:13.372 "get_zone_info": false, 00:19:13.372 "zone_management": false, 00:19:13.372 "zone_append": false, 00:19:13.372 "compare": false, 00:19:13.372 "compare_and_write": false, 00:19:13.372 "abort": false, 00:19:13.372 "seek_hole": false, 00:19:13.372 "seek_data": false, 00:19:13.372 "copy": false, 00:19:13.372 "nvme_iov_md": false 00:19:13.372 }, 00:19:13.372 "memory_domains": [ 00:19:13.372 { 00:19:13.372 "dma_device_id": "system", 00:19:13.372 "dma_device_type": 1 00:19:13.372 }, 00:19:13.372 { 00:19:13.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.373 "dma_device_type": 2 00:19:13.373 }, 00:19:13.373 { 00:19:13.373 "dma_device_id": "system", 00:19:13.373 "dma_device_type": 1 00:19:13.373 }, 00:19:13.373 { 00:19:13.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.373 "dma_device_type": 2 00:19:13.373 }, 00:19:13.373 { 00:19:13.373 "dma_device_id": "system", 00:19:13.373 "dma_device_type": 1 00:19:13.373 }, 00:19:13.373 { 00:19:13.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.373 "dma_device_type": 2 00:19:13.373 } 00:19:13.373 ], 00:19:13.373 "driver_specific": { 00:19:13.373 "raid": { 00:19:13.373 "uuid": "4f787df3-6323-446d-8a86-f7813f03f0b9", 00:19:13.373 "strip_size_kb": 64, 00:19:13.373 "state": "online", 00:19:13.373 "raid_level": "concat", 00:19:13.373 "superblock": true, 00:19:13.373 "num_base_bdevs": 3, 00:19:13.373 "num_base_bdevs_discovered": 3, 00:19:13.373 "num_base_bdevs_operational": 3, 00:19:13.373 "base_bdevs_list": [ 00:19:13.373 { 00:19:13.373 "name": "NewBaseBdev", 00:19:13.373 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:13.373 "is_configured": true, 00:19:13.373 "data_offset": 2048, 00:19:13.373 "data_size": 63488 00:19:13.373 }, 00:19:13.373 { 00:19:13.373 "name": "BaseBdev2", 00:19:13.373 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:13.373 "is_configured": true, 00:19:13.373 "data_offset": 2048, 00:19:13.373 "data_size": 63488 00:19:13.373 }, 00:19:13.373 { 00:19:13.373 "name": "BaseBdev3", 00:19:13.373 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:13.373 "is_configured": 
true, 00:19:13.373 "data_offset": 2048, 00:19:13.373 "data_size": 63488 00:19:13.373 } 00:19:13.373 ] 00:19:13.373 } 00:19:13.373 } 00:19:13.373 }' 00:19:13.631 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:13.631 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:13.631 BaseBdev2 00:19:13.631 BaseBdev3' 00:19:13.631 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:13.631 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:13.631 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:13.890 "name": "NewBaseBdev", 00:19:13.890 "aliases": [ 00:19:13.890 "8b7298c7-d825-48e2-b07d-6127e5e4fa83" 00:19:13.890 ], 00:19:13.890 "product_name": "Malloc disk", 00:19:13.890 "block_size": 512, 00:19:13.890 "num_blocks": 65536, 00:19:13.890 "uuid": "8b7298c7-d825-48e2-b07d-6127e5e4fa83", 00:19:13.890 "assigned_rate_limits": { 00:19:13.890 "rw_ios_per_sec": 0, 00:19:13.890 "rw_mbytes_per_sec": 0, 00:19:13.890 "r_mbytes_per_sec": 0, 00:19:13.890 "w_mbytes_per_sec": 0 00:19:13.890 }, 00:19:13.890 "claimed": true, 00:19:13.890 "claim_type": "exclusive_write", 00:19:13.890 "zoned": false, 00:19:13.890 "supported_io_types": { 00:19:13.890 "read": true, 00:19:13.890 "write": true, 00:19:13.890 "unmap": true, 00:19:13.890 "flush": true, 00:19:13.890 "reset": true, 00:19:13.890 "nvme_admin": false, 00:19:13.890 "nvme_io": false, 00:19:13.890 "nvme_io_md": false, 00:19:13.890 "write_zeroes": true, 00:19:13.890 "zcopy": true, 00:19:13.890 "get_zone_info": false, 00:19:13.890 "zone_management": false, 00:19:13.890 "zone_append": false, 00:19:13.890 "compare": false, 00:19:13.890 "compare_and_write": false, 00:19:13.890 "abort": true, 00:19:13.890 "seek_hole": false, 00:19:13.890 "seek_data": false, 00:19:13.890 "copy": true, 00:19:13.890 "nvme_iov_md": false 00:19:13.890 }, 00:19:13.890 "memory_domains": [ 00:19:13.890 { 00:19:13.890 "dma_device_id": "system", 00:19:13.890 "dma_device_type": 1 00:19:13.890 }, 00:19:13.890 { 00:19:13.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.890 "dma_device_type": 2 00:19:13.890 } 00:19:13.890 ], 00:19:13.890 "driver_specific": {} 00:19:13.890 }' 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:13.890 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.149 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.149 14:11:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:14.149 14:11:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:14.149 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:14.149 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:14.149 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:14.149 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:14.149 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:14.408 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:14.408 "name": "BaseBdev2", 00:19:14.408 "aliases": [ 00:19:14.408 "d4031a9a-6cf2-41e9-b512-fb48edfab950" 00:19:14.408 ], 00:19:14.408 "product_name": "Malloc disk", 00:19:14.408 "block_size": 512, 00:19:14.408 "num_blocks": 65536, 00:19:14.408 "uuid": "d4031a9a-6cf2-41e9-b512-fb48edfab950", 00:19:14.408 "assigned_rate_limits": { 00:19:14.408 "rw_ios_per_sec": 0, 00:19:14.408 "rw_mbytes_per_sec": 0, 00:19:14.408 "r_mbytes_per_sec": 0, 00:19:14.408 "w_mbytes_per_sec": 0 00:19:14.408 }, 00:19:14.408 "claimed": true, 00:19:14.408 "claim_type": "exclusive_write", 00:19:14.408 "zoned": false, 00:19:14.408 "supported_io_types": { 00:19:14.408 "read": true, 00:19:14.408 "write": true, 00:19:14.408 "unmap": true, 00:19:14.408 "flush": true, 00:19:14.408 "reset": true, 00:19:14.408 "nvme_admin": false, 00:19:14.408 "nvme_io": false, 00:19:14.408 "nvme_io_md": false, 00:19:14.408 "write_zeroes": true, 00:19:14.408 "zcopy": true, 00:19:14.408 "get_zone_info": false, 00:19:14.408 "zone_management": false, 00:19:14.408 "zone_append": false, 00:19:14.408 "compare": false, 00:19:14.408 "compare_and_write": false, 00:19:14.408 "abort": true, 00:19:14.408 "seek_hole": false, 00:19:14.408 "seek_data": false, 00:19:14.408 "copy": true, 00:19:14.408 "nvme_iov_md": false 00:19:14.408 }, 00:19:14.408 "memory_domains": [ 00:19:14.408 { 00:19:14.408 "dma_device_id": "system", 00:19:14.408 "dma_device_type": 1 00:19:14.408 }, 00:19:14.408 { 00:19:14.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:14.408 "dma_device_type": 2 00:19:14.408 } 00:19:14.408 ], 00:19:14.408 "driver_specific": {} 00:19:14.408 }' 00:19:14.408 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:14.666 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:14.666 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:14.666 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:14.667 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:14.667 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:14.667 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.667 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:14.667 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:14.667 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:19:14.925 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:14.925 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:14.925 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:14.925 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:14.925 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:15.184 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:15.184 "name": "BaseBdev3", 00:19:15.184 "aliases": [ 00:19:15.184 "98d37710-a07f-4c10-9490-673fcb5838c7" 00:19:15.184 ], 00:19:15.184 "product_name": "Malloc disk", 00:19:15.184 "block_size": 512, 00:19:15.184 "num_blocks": 65536, 00:19:15.184 "uuid": "98d37710-a07f-4c10-9490-673fcb5838c7", 00:19:15.184 "assigned_rate_limits": { 00:19:15.184 "rw_ios_per_sec": 0, 00:19:15.184 "rw_mbytes_per_sec": 0, 00:19:15.184 "r_mbytes_per_sec": 0, 00:19:15.184 "w_mbytes_per_sec": 0 00:19:15.184 }, 00:19:15.184 "claimed": true, 00:19:15.184 "claim_type": "exclusive_write", 00:19:15.184 "zoned": false, 00:19:15.184 "supported_io_types": { 00:19:15.184 "read": true, 00:19:15.184 "write": true, 00:19:15.184 "unmap": true, 00:19:15.184 "flush": true, 00:19:15.184 "reset": true, 00:19:15.184 "nvme_admin": false, 00:19:15.184 "nvme_io": false, 00:19:15.184 "nvme_io_md": false, 00:19:15.184 "write_zeroes": true, 00:19:15.184 "zcopy": true, 00:19:15.184 "get_zone_info": false, 00:19:15.184 "zone_management": false, 00:19:15.184 "zone_append": false, 00:19:15.184 "compare": false, 00:19:15.184 "compare_and_write": false, 00:19:15.184 "abort": true, 00:19:15.184 "seek_hole": false, 00:19:15.184 "seek_data": false, 00:19:15.184 "copy": true, 00:19:15.184 "nvme_iov_md": false 00:19:15.184 }, 00:19:15.184 "memory_domains": [ 00:19:15.184 { 00:19:15.184 "dma_device_id": "system", 00:19:15.184 "dma_device_type": 1 00:19:15.184 }, 00:19:15.184 { 00:19:15.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:15.184 "dma_device_type": 2 00:19:15.184 } 00:19:15.184 ], 00:19:15.184 "driver_specific": {} 00:19:15.184 }' 00:19:15.184 14:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.184 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:15.184 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:15.184 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.185 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:15.185 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:15.185 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.443 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:15.443 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:15.443 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.443 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:15.443 14:12:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:15.443 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:15.702 [2024-07-15 14:12:01.609081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:15.702 [2024-07-15 14:12:01.609310] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.702 [2024-07-15 14:12:01.609505] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.702 [2024-07-15 14:12:01.609684] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.702 [2024-07-15 14:12:01.609833] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 195175 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 195175 ']' 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 195175 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 195175 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 195175' 00:19:15.702 killing process with pid 195175 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 195175 00:19:15.702 [2024-07-15 14:12:01.655497] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:15.702 14:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 195175 00:19:15.960 [2024-07-15 14:12:01.916713] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:17.397 14:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:19:17.397 00:19:17.397 real 0m35.095s 00:19:17.397 user 1m4.630s 00:19:17.397 sys 0m3.969s 00:19:17.397 14:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:17.397 14:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.397 ************************************ 00:19:17.397 END TEST raid_state_function_test_sb 00:19:17.397 ************************************ 00:19:17.397 14:12:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:17.397 14:12:03 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:19:17.397 14:12:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:17.397 14:12:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.397 14:12:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:17.397 
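For reference, the state checks traced above reduce to a few RPC queries against the test socket plus jq assertions. A minimal standalone re-run of those checks could look like the sketch below; the socket path, rpc.py path, bdev names and jq filters are taken from the trace, and a running bdev_svc instance with the Existed_Raid volume is assumed:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # dump the raid bdev and list the names of its configured base bdevs
  $RPC bdev_get_bdevs -b Existed_Raid | jq '.[]'
  $RPC bdev_get_bdevs -b Existed_Raid | \
      jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
  # per-base-bdev property checks (512-byte blocks, no metadata/DIF), as in verify_raid_bdev_properties
  for name in NewBaseBdev BaseBdev2 BaseBdev3; do
      info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<< "$info") == 512  ]]
      [[ $(jq .md_size       <<< "$info") == null ]]
      [[ $(jq .md_interleave <<< "$info") == null ]]
      [[ $(jq .dif_type      <<< "$info") == null ]]
  done
  # teardown, as the test does before killing the app
  $RPC bdev_raid_delete Existed_Raid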
************************************ 00:19:17.397 START TEST raid_superblock_test 00:19:17.397 ************************************ 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=196205 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 196205 /var/tmp/spdk-raid.sock 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 196205 ']' 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:17.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:17.397 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:17.397 [2024-07-15 14:12:03.174648] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:19:17.397 [2024-07-15 14:12:03.175035] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196205 ] 00:19:17.397 [2024-07-15 14:12:03.330497] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.654 [2024-07-15 14:12:03.627527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.912 [2024-07-15 14:12:03.827645] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.478 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:19:18.736 malloc1 00:19:18.736 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:18.994 [2024-07-15 14:12:04.817705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.994 [2024-07-15 14:12:04.818237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.994 [2024-07-15 14:12:04.818549] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:18.994 [2024-07-15 14:12:04.818783] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.994 [2024-07-15 14:12:04.820764] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.994 [2024-07-15 14:12:04.821062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.994 pt1 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:18.994 14:12:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:19:19.251 malloc2 00:19:19.251 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:19.508 [2024-07-15 14:12:05.358672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:19.508 [2024-07-15 14:12:05.358986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.508 [2024-07-15 14:12:05.359160] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:19:19.508 [2024-07-15 14:12:05.359301] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.508 [2024-07-15 14:12:05.361104] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.508 [2024-07-15 14:12:05.361269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:19.508 pt2 00:19:19.508 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:19.508 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:19.508 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:19:19.508 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:19:19.509 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:19:19.509 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:19.509 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:19:19.509 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:19.509 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:19:19.766 malloc3 00:19:19.766 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:20.024 [2024-07-15 14:12:05.881648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:20.024 [2024-07-15 14:12:05.881971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.024 [2024-07-15 14:12:05.882128] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:19:20.024 [2024-07-15 14:12:05.882308] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.024 [2024-07-15 14:12:05.884122] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.024 [2024-07-15 14:12:05.884292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:20.024 pt3 00:19:20.024 
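The base bdev preparation traced above repeats the same pattern for each index: a 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching num_blocks in the dumps) wrapped in a passthru bdev with a fixed UUID. A condensed sketch of that setup loop, with the rpc.py path and socket taken from the trace and a running bdev_svc assumed:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
      # 32 MiB malloc bdev with 512-byte blocks, e.g. malloc1
      $RPC bdev_malloc_create 32 512 -b "malloc$i"
      # passthru bdev on top of it with a deterministic UUID, e.g. pt1
      $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
  done
  # the concat volume itself is assembled from pt1-pt3 in the next step of the trace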
14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:19:20.024 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:19:20.024 14:12:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:19:20.306 [2024-07-15 14:12:06.141749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:20.306 [2024-07-15 14:12:06.143532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.306 [2024-07-15 14:12:06.143758] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:20.306 [2024-07-15 14:12:06.144073] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:19:20.306 [2024-07-15 14:12:06.144190] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:20.306 [2024-07-15 14:12:06.144348] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:20.306 [2024-07-15 14:12:06.144669] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:19:20.306 [2024-07-15 14:12:06.144823] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:19:20.306 [2024-07-15 14:12:06.145138] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.306 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.565 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:20.565 "name": "raid_bdev1", 00:19:20.565 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:20.565 "strip_size_kb": 64, 00:19:20.565 "state": "online", 00:19:20.565 "raid_level": "concat", 00:19:20.565 "superblock": true, 00:19:20.565 "num_base_bdevs": 3, 00:19:20.565 "num_base_bdevs_discovered": 3, 00:19:20.565 "num_base_bdevs_operational": 3, 00:19:20.565 "base_bdevs_list": [ 00:19:20.565 { 00:19:20.565 "name": "pt1", 00:19:20.565 "uuid": "00000000-0000-0000-0000-000000000001", 
00:19:20.565 "is_configured": true, 00:19:20.565 "data_offset": 2048, 00:19:20.566 "data_size": 63488 00:19:20.566 }, 00:19:20.566 { 00:19:20.566 "name": "pt2", 00:19:20.566 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.566 "is_configured": true, 00:19:20.566 "data_offset": 2048, 00:19:20.566 "data_size": 63488 00:19:20.566 }, 00:19:20.566 { 00:19:20.566 "name": "pt3", 00:19:20.566 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:20.566 "is_configured": true, 00:19:20.566 "data_offset": 2048, 00:19:20.566 "data_size": 63488 00:19:20.566 } 00:19:20.566 ] 00:19:20.566 }' 00:19:20.566 14:12:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:20.566 14:12:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.501 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:19:21.501 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:21.501 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:21.501 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:21.501 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:21.501 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:21.501 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:21.502 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:21.502 [2024-07-15 14:12:07.418101] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:21.502 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:21.502 "name": "raid_bdev1", 00:19:21.502 "aliases": [ 00:19:21.502 "1475c63c-7747-45f7-89f8-37afcb96f3b5" 00:19:21.502 ], 00:19:21.502 "product_name": "Raid Volume", 00:19:21.502 "block_size": 512, 00:19:21.502 "num_blocks": 190464, 00:19:21.502 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:21.502 "assigned_rate_limits": { 00:19:21.502 "rw_ios_per_sec": 0, 00:19:21.502 "rw_mbytes_per_sec": 0, 00:19:21.502 "r_mbytes_per_sec": 0, 00:19:21.502 "w_mbytes_per_sec": 0 00:19:21.502 }, 00:19:21.502 "claimed": false, 00:19:21.502 "zoned": false, 00:19:21.502 "supported_io_types": { 00:19:21.502 "read": true, 00:19:21.502 "write": true, 00:19:21.502 "unmap": true, 00:19:21.502 "flush": true, 00:19:21.502 "reset": true, 00:19:21.502 "nvme_admin": false, 00:19:21.502 "nvme_io": false, 00:19:21.502 "nvme_io_md": false, 00:19:21.502 "write_zeroes": true, 00:19:21.502 "zcopy": false, 00:19:21.502 "get_zone_info": false, 00:19:21.502 "zone_management": false, 00:19:21.502 "zone_append": false, 00:19:21.502 "compare": false, 00:19:21.502 "compare_and_write": false, 00:19:21.502 "abort": false, 00:19:21.502 "seek_hole": false, 00:19:21.502 "seek_data": false, 00:19:21.502 "copy": false, 00:19:21.502 "nvme_iov_md": false 00:19:21.502 }, 00:19:21.502 "memory_domains": [ 00:19:21.502 { 00:19:21.502 "dma_device_id": "system", 00:19:21.502 "dma_device_type": 1 00:19:21.502 }, 00:19:21.502 { 00:19:21.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.502 "dma_device_type": 2 00:19:21.502 }, 00:19:21.502 { 00:19:21.502 "dma_device_id": "system", 00:19:21.502 "dma_device_type": 1 00:19:21.502 }, 
00:19:21.502 { 00:19:21.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.502 "dma_device_type": 2 00:19:21.502 }, 00:19:21.502 { 00:19:21.502 "dma_device_id": "system", 00:19:21.502 "dma_device_type": 1 00:19:21.502 }, 00:19:21.502 { 00:19:21.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.502 "dma_device_type": 2 00:19:21.502 } 00:19:21.502 ], 00:19:21.502 "driver_specific": { 00:19:21.502 "raid": { 00:19:21.502 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:21.502 "strip_size_kb": 64, 00:19:21.502 "state": "online", 00:19:21.502 "raid_level": "concat", 00:19:21.502 "superblock": true, 00:19:21.502 "num_base_bdevs": 3, 00:19:21.502 "num_base_bdevs_discovered": 3, 00:19:21.502 "num_base_bdevs_operational": 3, 00:19:21.502 "base_bdevs_list": [ 00:19:21.502 { 00:19:21.502 "name": "pt1", 00:19:21.502 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.502 "is_configured": true, 00:19:21.502 "data_offset": 2048, 00:19:21.502 "data_size": 63488 00:19:21.502 }, 00:19:21.502 { 00:19:21.502 "name": "pt2", 00:19:21.502 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:21.502 "is_configured": true, 00:19:21.502 "data_offset": 2048, 00:19:21.502 "data_size": 63488 00:19:21.502 }, 00:19:21.502 { 00:19:21.502 "name": "pt3", 00:19:21.502 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:21.502 "is_configured": true, 00:19:21.502 "data_offset": 2048, 00:19:21.502 "data_size": 63488 00:19:21.502 } 00:19:21.502 ] 00:19:21.502 } 00:19:21.502 } 00:19:21.502 }' 00:19:21.502 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:21.502 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:21.502 pt2 00:19:21.502 pt3' 00:19:21.502 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:21.502 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:21.502 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:21.796 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:21.796 "name": "pt1", 00:19:21.796 "aliases": [ 00:19:21.796 "00000000-0000-0000-0000-000000000001" 00:19:21.796 ], 00:19:21.796 "product_name": "passthru", 00:19:21.796 "block_size": 512, 00:19:21.796 "num_blocks": 65536, 00:19:21.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:21.796 "assigned_rate_limits": { 00:19:21.796 "rw_ios_per_sec": 0, 00:19:21.796 "rw_mbytes_per_sec": 0, 00:19:21.796 "r_mbytes_per_sec": 0, 00:19:21.796 "w_mbytes_per_sec": 0 00:19:21.796 }, 00:19:21.796 "claimed": true, 00:19:21.796 "claim_type": "exclusive_write", 00:19:21.796 "zoned": false, 00:19:21.796 "supported_io_types": { 00:19:21.796 "read": true, 00:19:21.796 "write": true, 00:19:21.796 "unmap": true, 00:19:21.796 "flush": true, 00:19:21.796 "reset": true, 00:19:21.796 "nvme_admin": false, 00:19:21.796 "nvme_io": false, 00:19:21.796 "nvme_io_md": false, 00:19:21.796 "write_zeroes": true, 00:19:21.796 "zcopy": true, 00:19:21.796 "get_zone_info": false, 00:19:21.796 "zone_management": false, 00:19:21.796 "zone_append": false, 00:19:21.796 "compare": false, 00:19:21.796 "compare_and_write": false, 00:19:21.796 "abort": true, 00:19:21.796 "seek_hole": false, 00:19:21.796 "seek_data": false, 00:19:21.796 "copy": true, 00:19:21.796 "nvme_iov_md": 
false 00:19:21.796 }, 00:19:21.796 "memory_domains": [ 00:19:21.796 { 00:19:21.796 "dma_device_id": "system", 00:19:21.796 "dma_device_type": 1 00:19:21.796 }, 00:19:21.796 { 00:19:21.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:21.796 "dma_device_type": 2 00:19:21.796 } 00:19:21.796 ], 00:19:21.796 "driver_specific": { 00:19:21.796 "passthru": { 00:19:21.796 "name": "pt1", 00:19:21.796 "base_bdev_name": "malloc1" 00:19:21.796 } 00:19:21.796 } 00:19:21.796 }' 00:19:21.796 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.054 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.054 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.054 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.054 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.054 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.054 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.054 14:12:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.055 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.055 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.314 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.314 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.314 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:22.314 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:22.314 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:22.574 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:22.574 "name": "pt2", 00:19:22.574 "aliases": [ 00:19:22.574 "00000000-0000-0000-0000-000000000002" 00:19:22.574 ], 00:19:22.574 "product_name": "passthru", 00:19:22.574 "block_size": 512, 00:19:22.574 "num_blocks": 65536, 00:19:22.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:22.574 "assigned_rate_limits": { 00:19:22.574 "rw_ios_per_sec": 0, 00:19:22.574 "rw_mbytes_per_sec": 0, 00:19:22.574 "r_mbytes_per_sec": 0, 00:19:22.574 "w_mbytes_per_sec": 0 00:19:22.574 }, 00:19:22.574 "claimed": true, 00:19:22.574 "claim_type": "exclusive_write", 00:19:22.574 "zoned": false, 00:19:22.574 "supported_io_types": { 00:19:22.574 "read": true, 00:19:22.574 "write": true, 00:19:22.574 "unmap": true, 00:19:22.574 "flush": true, 00:19:22.574 "reset": true, 00:19:22.574 "nvme_admin": false, 00:19:22.574 "nvme_io": false, 00:19:22.574 "nvme_io_md": false, 00:19:22.574 "write_zeroes": true, 00:19:22.574 "zcopy": true, 00:19:22.574 "get_zone_info": false, 00:19:22.574 "zone_management": false, 00:19:22.574 "zone_append": false, 00:19:22.574 "compare": false, 00:19:22.574 "compare_and_write": false, 00:19:22.574 "abort": true, 00:19:22.574 "seek_hole": false, 00:19:22.574 "seek_data": false, 00:19:22.574 "copy": true, 00:19:22.574 "nvme_iov_md": false 00:19:22.574 }, 00:19:22.574 "memory_domains": [ 00:19:22.574 { 00:19:22.574 "dma_device_id": "system", 00:19:22.574 "dma_device_type": 1 
00:19:22.574 }, 00:19:22.574 { 00:19:22.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.574 "dma_device_type": 2 00:19:22.574 } 00:19:22.574 ], 00:19:22.574 "driver_specific": { 00:19:22.574 "passthru": { 00:19:22.574 "name": "pt2", 00:19:22.574 "base_bdev_name": "malloc2" 00:19:22.574 } 00:19:22.574 } 00:19:22.574 }' 00:19:22.574 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.574 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:22.574 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:22.574 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.574 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:22.832 14:12:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:23.399 "name": "pt3", 00:19:23.399 "aliases": [ 00:19:23.399 "00000000-0000-0000-0000-000000000003" 00:19:23.399 ], 00:19:23.399 "product_name": "passthru", 00:19:23.399 "block_size": 512, 00:19:23.399 "num_blocks": 65536, 00:19:23.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:23.399 "assigned_rate_limits": { 00:19:23.399 "rw_ios_per_sec": 0, 00:19:23.399 "rw_mbytes_per_sec": 0, 00:19:23.399 "r_mbytes_per_sec": 0, 00:19:23.399 "w_mbytes_per_sec": 0 00:19:23.399 }, 00:19:23.399 "claimed": true, 00:19:23.399 "claim_type": "exclusive_write", 00:19:23.399 "zoned": false, 00:19:23.399 "supported_io_types": { 00:19:23.399 "read": true, 00:19:23.399 "write": true, 00:19:23.399 "unmap": true, 00:19:23.399 "flush": true, 00:19:23.399 "reset": true, 00:19:23.399 "nvme_admin": false, 00:19:23.399 "nvme_io": false, 00:19:23.399 "nvme_io_md": false, 00:19:23.399 "write_zeroes": true, 00:19:23.399 "zcopy": true, 00:19:23.399 "get_zone_info": false, 00:19:23.399 "zone_management": false, 00:19:23.399 "zone_append": false, 00:19:23.399 "compare": false, 00:19:23.399 "compare_and_write": false, 00:19:23.399 "abort": true, 00:19:23.399 "seek_hole": false, 00:19:23.399 "seek_data": false, 00:19:23.399 "copy": true, 00:19:23.399 "nvme_iov_md": false 00:19:23.399 }, 00:19:23.399 "memory_domains": [ 00:19:23.399 { 00:19:23.399 "dma_device_id": "system", 00:19:23.399 "dma_device_type": 1 00:19:23.399 }, 00:19:23.399 { 00:19:23.399 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:23.399 "dma_device_type": 2 00:19:23.399 } 00:19:23.399 ], 
00:19:23.399 "driver_specific": { 00:19:23.399 "passthru": { 00:19:23.399 "name": "pt3", 00:19:23.399 "base_bdev_name": "malloc3" 00:19:23.399 } 00:19:23.399 } 00:19:23.399 }' 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.399 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:23.657 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:23.657 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.657 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:23.657 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:23.657 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:23.657 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:19:23.916 [2024-07-15 14:12:09.802547] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.916 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1475c63c-7747-45f7-89f8-37afcb96f3b5 00:19:23.916 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 1475c63c-7747-45f7-89f8-37afcb96f3b5 ']' 00:19:23.916 14:12:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:24.174 [2024-07-15 14:12:10.094438] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:24.174 [2024-07-15 14:12:10.094697] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:24.174 [2024-07-15 14:12:10.094896] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:24.174 [2024-07-15 14:12:10.095044] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:24.174 [2024-07-15 14:12:10.095149] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:19:24.174 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:24.174 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:19:24.433 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:19:24.433 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:19:24.433 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.433 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:19:24.692 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.692 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:24.952 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:19:24.952 14:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:25.210 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:19:25.211 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:25.470 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:25.728 [2024-07-15 14:12:11.626733] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:25.728 [2024-07-15 14:12:11.628343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:25.728 [2024-07-15 14:12:11.628535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:25.728 [2024-07-15 14:12:11.628682] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:25.728 
[2024-07-15 14:12:11.629080] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:25.728 [2024-07-15 14:12:11.629422] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:25.728 [2024-07-15 14:12:11.629702] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:25.728 [2024-07-15 14:12:11.629954] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:19:25.728 request: 00:19:25.728 { 00:19:25.728 "name": "raid_bdev1", 00:19:25.728 "raid_level": "concat", 00:19:25.728 "base_bdevs": [ 00:19:25.728 "malloc1", 00:19:25.728 "malloc2", 00:19:25.728 "malloc3" 00:19:25.728 ], 00:19:25.728 "strip_size_kb": 64, 00:19:25.728 "superblock": false, 00:19:25.728 "method": "bdev_raid_create", 00:19:25.728 "req_id": 1 00:19:25.728 } 00:19:25.728 Got JSON-RPC error response 00:19:25.728 response: 00:19:25.728 { 00:19:25.728 "code": -17, 00:19:25.728 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:25.728 } 00:19:25.728 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:19:25.728 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:25.728 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:25.728 14:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:25.728 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.728 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:25.986 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:25.986 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:25.986 14:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:26.245 [2024-07-15 14:12:12.191019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:26.245 [2024-07-15 14:12:12.191299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.245 [2024-07-15 14:12:12.191382] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:26.245 [2024-07-15 14:12:12.191610] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.245 [2024-07-15 14:12:12.193453] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.245 [2024-07-15 14:12:12.193635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:26.245 [2024-07-15 14:12:12.193867] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:26.245 [2024-07-15 14:12:12.194025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:26.245 pt1 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:26.245 14:12:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:26.245 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.505 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:26.505 "name": "raid_bdev1", 00:19:26.505 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:26.505 "strip_size_kb": 64, 00:19:26.505 "state": "configuring", 00:19:26.505 "raid_level": "concat", 00:19:26.505 "superblock": true, 00:19:26.505 "num_base_bdevs": 3, 00:19:26.505 "num_base_bdevs_discovered": 1, 00:19:26.505 "num_base_bdevs_operational": 3, 00:19:26.505 "base_bdevs_list": [ 00:19:26.505 { 00:19:26.505 "name": "pt1", 00:19:26.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:26.505 "is_configured": true, 00:19:26.505 "data_offset": 2048, 00:19:26.505 "data_size": 63488 00:19:26.505 }, 00:19:26.505 { 00:19:26.505 "name": null, 00:19:26.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:26.505 "is_configured": false, 00:19:26.505 "data_offset": 2048, 00:19:26.505 "data_size": 63488 00:19:26.505 }, 00:19:26.505 { 00:19:26.505 "name": null, 00:19:26.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:26.505 "is_configured": false, 00:19:26.506 "data_offset": 2048, 00:19:26.506 "data_size": 63488 00:19:26.506 } 00:19:26.506 ] 00:19:26.506 }' 00:19:26.506 14:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:26.506 14:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:27.441 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:19:27.441 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:27.441 [2024-07-15 14:12:13.367551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:27.441 [2024-07-15 14:12:13.367860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:27.441 [2024-07-15 14:12:13.368053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:27.441 [2024-07-15 14:12:13.368203] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:27.441 [2024-07-15 14:12:13.368789] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:27.441 [2024-07-15 14:12:13.368993] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:19:27.441 [2024-07-15 14:12:13.369244] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:27.441 [2024-07-15 14:12:13.369415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:27.441 pt2 00:19:27.441 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:27.700 [2024-07-15 14:12:13.691622] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.958 "name": "raid_bdev1", 00:19:27.958 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:27.958 "strip_size_kb": 64, 00:19:27.958 "state": "configuring", 00:19:27.958 "raid_level": "concat", 00:19:27.958 "superblock": true, 00:19:27.958 "num_base_bdevs": 3, 00:19:27.958 "num_base_bdevs_discovered": 1, 00:19:27.958 "num_base_bdevs_operational": 3, 00:19:27.958 "base_bdevs_list": [ 00:19:27.958 { 00:19:27.958 "name": "pt1", 00:19:27.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:27.958 "is_configured": true, 00:19:27.958 "data_offset": 2048, 00:19:27.958 "data_size": 63488 00:19:27.958 }, 00:19:27.958 { 00:19:27.958 "name": null, 00:19:27.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:27.958 "is_configured": false, 00:19:27.958 "data_offset": 2048, 00:19:27.958 "data_size": 63488 00:19:27.958 }, 00:19:27.958 { 00:19:27.958 "name": null, 00:19:27.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:27.958 "is_configured": false, 00:19:27.958 "data_offset": 2048, 00:19:27.958 "data_size": 63488 00:19:27.958 } 00:19:27.958 ] 00:19:27.958 }' 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.958 14:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.894 14:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:28.894 14:12:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:28.894 14:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:29.153 [2024-07-15 14:12:14.907767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:29.153 [2024-07-15 14:12:14.908090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.153 [2024-07-15 14:12:14.908244] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:29.153 [2024-07-15 14:12:14.908375] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.153 [2024-07-15 14:12:14.908858] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.153 [2024-07-15 14:12:14.909033] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:29.153 [2024-07-15 14:12:14.909240] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:29.153 [2024-07-15 14:12:14.909382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:29.153 pt2 00:19:29.153 14:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:29.153 14:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:29.153 14:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:29.411 [2024-07-15 14:12:15.187807] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:29.411 [2024-07-15 14:12:15.188048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:29.411 [2024-07-15 14:12:15.188221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:19:29.411 [2024-07-15 14:12:15.188355] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:29.411 [2024-07-15 14:12:15.188931] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:29.411 [2024-07-15 14:12:15.189086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:29.411 [2024-07-15 14:12:15.189290] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:29.411 [2024-07-15 14:12:15.189410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:29.411 [2024-07-15 14:12:15.189639] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:19:29.411 [2024-07-15 14:12:15.189761] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:29.411 [2024-07-15 14:12:15.189886] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:29.411 [2024-07-15 14:12:15.190161] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:19:29.411 [2024-07-15 14:12:15.190211] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:19:29.411 [2024-07-15 14:12:15.190417] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.411 pt3 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( 
i++ )) 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.411 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.668 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.668 "name": "raid_bdev1", 00:19:29.668 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:29.668 "strip_size_kb": 64, 00:19:29.668 "state": "online", 00:19:29.668 "raid_level": "concat", 00:19:29.668 "superblock": true, 00:19:29.668 "num_base_bdevs": 3, 00:19:29.668 "num_base_bdevs_discovered": 3, 00:19:29.668 "num_base_bdevs_operational": 3, 00:19:29.668 "base_bdevs_list": [ 00:19:29.668 { 00:19:29.668 "name": "pt1", 00:19:29.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:29.668 "is_configured": true, 00:19:29.668 "data_offset": 2048, 00:19:29.668 "data_size": 63488 00:19:29.668 }, 00:19:29.668 { 00:19:29.668 "name": "pt2", 00:19:29.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:29.668 "is_configured": true, 00:19:29.668 "data_offset": 2048, 00:19:29.668 "data_size": 63488 00:19:29.668 }, 00:19:29.668 { 00:19:29.668 "name": "pt3", 00:19:29.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:29.668 "is_configured": true, 00:19:29.668 "data_offset": 2048, 00:19:29.668 "data_size": 63488 00:19:29.668 } 00:19:29.668 ] 00:19:29.668 }' 00:19:29.669 14:12:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.669 14:12:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 
00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:30.234 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:30.492 [2024-07-15 14:12:16.388161] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:30.492 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:30.492 "name": "raid_bdev1", 00:19:30.492 "aliases": [ 00:19:30.492 "1475c63c-7747-45f7-89f8-37afcb96f3b5" 00:19:30.492 ], 00:19:30.492 "product_name": "Raid Volume", 00:19:30.492 "block_size": 512, 00:19:30.492 "num_blocks": 190464, 00:19:30.492 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:30.492 "assigned_rate_limits": { 00:19:30.492 "rw_ios_per_sec": 0, 00:19:30.492 "rw_mbytes_per_sec": 0, 00:19:30.492 "r_mbytes_per_sec": 0, 00:19:30.492 "w_mbytes_per_sec": 0 00:19:30.492 }, 00:19:30.492 "claimed": false, 00:19:30.492 "zoned": false, 00:19:30.492 "supported_io_types": { 00:19:30.492 "read": true, 00:19:30.492 "write": true, 00:19:30.492 "unmap": true, 00:19:30.492 "flush": true, 00:19:30.492 "reset": true, 00:19:30.492 "nvme_admin": false, 00:19:30.492 "nvme_io": false, 00:19:30.492 "nvme_io_md": false, 00:19:30.492 "write_zeroes": true, 00:19:30.492 "zcopy": false, 00:19:30.492 "get_zone_info": false, 00:19:30.492 "zone_management": false, 00:19:30.492 "zone_append": false, 00:19:30.492 "compare": false, 00:19:30.492 "compare_and_write": false, 00:19:30.492 "abort": false, 00:19:30.492 "seek_hole": false, 00:19:30.492 "seek_data": false, 00:19:30.492 "copy": false, 00:19:30.492 "nvme_iov_md": false 00:19:30.492 }, 00:19:30.492 "memory_domains": [ 00:19:30.492 { 00:19:30.492 "dma_device_id": "system", 00:19:30.492 "dma_device_type": 1 00:19:30.492 }, 00:19:30.492 { 00:19:30.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.492 "dma_device_type": 2 00:19:30.492 }, 00:19:30.492 { 00:19:30.492 "dma_device_id": "system", 00:19:30.492 "dma_device_type": 1 00:19:30.492 }, 00:19:30.492 { 00:19:30.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.492 "dma_device_type": 2 00:19:30.492 }, 00:19:30.492 { 00:19:30.492 "dma_device_id": "system", 00:19:30.492 "dma_device_type": 1 00:19:30.492 }, 00:19:30.492 { 00:19:30.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.492 "dma_device_type": 2 00:19:30.492 } 00:19:30.492 ], 00:19:30.492 "driver_specific": { 00:19:30.492 "raid": { 00:19:30.492 "uuid": "1475c63c-7747-45f7-89f8-37afcb96f3b5", 00:19:30.492 "strip_size_kb": 64, 00:19:30.492 "state": "online", 00:19:30.492 "raid_level": "concat", 00:19:30.492 "superblock": true, 00:19:30.492 "num_base_bdevs": 3, 00:19:30.492 "num_base_bdevs_discovered": 3, 00:19:30.492 "num_base_bdevs_operational": 3, 00:19:30.492 "base_bdevs_list": [ 00:19:30.492 { 00:19:30.492 "name": "pt1", 00:19:30.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.492 "is_configured": true, 00:19:30.492 "data_offset": 2048, 00:19:30.492 "data_size": 63488 00:19:30.492 }, 00:19:30.492 { 00:19:30.492 "name": "pt2", 00:19:30.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:30.492 "is_configured": true, 00:19:30.492 "data_offset": 2048, 00:19:30.492 "data_size": 63488 00:19:30.492 }, 00:19:30.492 { 00:19:30.492 "name": "pt3", 00:19:30.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:30.492 "is_configured": true, 00:19:30.492 "data_offset": 2048, 00:19:30.492 "data_size": 63488 00:19:30.492 } 
00:19:30.492 ] 00:19:30.492 } 00:19:30.492 } 00:19:30.492 }' 00:19:30.492 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:30.492 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:30.492 pt2 00:19:30.492 pt3' 00:19:30.492 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:30.492 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:30.492 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:30.750 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:30.750 "name": "pt1", 00:19:30.750 "aliases": [ 00:19:30.750 "00000000-0000-0000-0000-000000000001" 00:19:30.750 ], 00:19:30.750 "product_name": "passthru", 00:19:30.750 "block_size": 512, 00:19:30.750 "num_blocks": 65536, 00:19:30.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:30.750 "assigned_rate_limits": { 00:19:30.750 "rw_ios_per_sec": 0, 00:19:30.750 "rw_mbytes_per_sec": 0, 00:19:30.750 "r_mbytes_per_sec": 0, 00:19:30.750 "w_mbytes_per_sec": 0 00:19:30.750 }, 00:19:30.750 "claimed": true, 00:19:30.750 "claim_type": "exclusive_write", 00:19:30.750 "zoned": false, 00:19:30.750 "supported_io_types": { 00:19:30.750 "read": true, 00:19:30.750 "write": true, 00:19:30.750 "unmap": true, 00:19:30.750 "flush": true, 00:19:30.750 "reset": true, 00:19:30.750 "nvme_admin": false, 00:19:30.750 "nvme_io": false, 00:19:30.750 "nvme_io_md": false, 00:19:30.750 "write_zeroes": true, 00:19:30.750 "zcopy": true, 00:19:30.750 "get_zone_info": false, 00:19:30.750 "zone_management": false, 00:19:30.750 "zone_append": false, 00:19:30.750 "compare": false, 00:19:30.750 "compare_and_write": false, 00:19:30.750 "abort": true, 00:19:30.750 "seek_hole": false, 00:19:30.750 "seek_data": false, 00:19:30.750 "copy": true, 00:19:30.750 "nvme_iov_md": false 00:19:30.750 }, 00:19:30.750 "memory_domains": [ 00:19:30.750 { 00:19:30.750 "dma_device_id": "system", 00:19:30.750 "dma_device_type": 1 00:19:30.750 }, 00:19:30.750 { 00:19:30.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.750 "dma_device_type": 2 00:19:30.750 } 00:19:30.750 ], 00:19:30.750 "driver_specific": { 00:19:30.750 "passthru": { 00:19:30.750 "name": "pt1", 00:19:30.750 "base_bdev_name": "malloc1" 00:19:30.750 } 00:19:30.750 } 00:19:30.750 }' 00:19:30.750 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.008 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.008 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.008 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.008 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.008 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:31.008 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.008 14:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.266 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.266 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:19:31.266 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.266 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:31.266 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:31.266 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:31.266 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:31.523 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:31.523 "name": "pt2", 00:19:31.523 "aliases": [ 00:19:31.523 "00000000-0000-0000-0000-000000000002" 00:19:31.523 ], 00:19:31.523 "product_name": "passthru", 00:19:31.523 "block_size": 512, 00:19:31.523 "num_blocks": 65536, 00:19:31.523 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:31.523 "assigned_rate_limits": { 00:19:31.523 "rw_ios_per_sec": 0, 00:19:31.523 "rw_mbytes_per_sec": 0, 00:19:31.523 "r_mbytes_per_sec": 0, 00:19:31.523 "w_mbytes_per_sec": 0 00:19:31.523 }, 00:19:31.523 "claimed": true, 00:19:31.523 "claim_type": "exclusive_write", 00:19:31.523 "zoned": false, 00:19:31.523 "supported_io_types": { 00:19:31.523 "read": true, 00:19:31.523 "write": true, 00:19:31.523 "unmap": true, 00:19:31.523 "flush": true, 00:19:31.523 "reset": true, 00:19:31.523 "nvme_admin": false, 00:19:31.523 "nvme_io": false, 00:19:31.523 "nvme_io_md": false, 00:19:31.523 "write_zeroes": true, 00:19:31.523 "zcopy": true, 00:19:31.523 "get_zone_info": false, 00:19:31.523 "zone_management": false, 00:19:31.523 "zone_append": false, 00:19:31.523 "compare": false, 00:19:31.523 "compare_and_write": false, 00:19:31.523 "abort": true, 00:19:31.523 "seek_hole": false, 00:19:31.523 "seek_data": false, 00:19:31.523 "copy": true, 00:19:31.523 "nvme_iov_md": false 00:19:31.523 }, 00:19:31.523 "memory_domains": [ 00:19:31.523 { 00:19:31.523 "dma_device_id": "system", 00:19:31.523 "dma_device_type": 1 00:19:31.524 }, 00:19:31.524 { 00:19:31.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.524 "dma_device_type": 2 00:19:31.524 } 00:19:31.524 ], 00:19:31.524 "driver_specific": { 00:19:31.524 "passthru": { 00:19:31.524 "name": "pt2", 00:19:31.524 "base_bdev_name": "malloc2" 00:19:31.524 } 00:19:31.524 } 00:19:31.524 }' 00:19:31.524 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.524 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:31.781 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.039 14:12:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.039 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.039 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:32.039 14:12:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.297 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.297 "name": "pt3", 00:19:32.297 "aliases": [ 00:19:32.297 "00000000-0000-0000-0000-000000000003" 00:19:32.297 ], 00:19:32.297 "product_name": "passthru", 00:19:32.297 "block_size": 512, 00:19:32.297 "num_blocks": 65536, 00:19:32.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:19:32.297 "assigned_rate_limits": { 00:19:32.297 "rw_ios_per_sec": 0, 00:19:32.297 "rw_mbytes_per_sec": 0, 00:19:32.297 "r_mbytes_per_sec": 0, 00:19:32.297 "w_mbytes_per_sec": 0 00:19:32.297 }, 00:19:32.297 "claimed": true, 00:19:32.297 "claim_type": "exclusive_write", 00:19:32.297 "zoned": false, 00:19:32.297 "supported_io_types": { 00:19:32.297 "read": true, 00:19:32.297 "write": true, 00:19:32.297 "unmap": true, 00:19:32.297 "flush": true, 00:19:32.297 "reset": true, 00:19:32.297 "nvme_admin": false, 00:19:32.297 "nvme_io": false, 00:19:32.297 "nvme_io_md": false, 00:19:32.297 "write_zeroes": true, 00:19:32.297 "zcopy": true, 00:19:32.297 "get_zone_info": false, 00:19:32.297 "zone_management": false, 00:19:32.297 "zone_append": false, 00:19:32.297 "compare": false, 00:19:32.297 "compare_and_write": false, 00:19:32.297 "abort": true, 00:19:32.297 "seek_hole": false, 00:19:32.297 "seek_data": false, 00:19:32.297 "copy": true, 00:19:32.297 "nvme_iov_md": false 00:19:32.297 }, 00:19:32.297 "memory_domains": [ 00:19:32.297 { 00:19:32.297 "dma_device_id": "system", 00:19:32.297 "dma_device_type": 1 00:19:32.297 }, 00:19:32.297 { 00:19:32.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.297 "dma_device_type": 2 00:19:32.297 } 00:19:32.297 ], 00:19:32.297 "driver_specific": { 00:19:32.297 "passthru": { 00:19:32.297 "name": "pt3", 00:19:32.297 "base_bdev_name": "malloc3" 00:19:32.297 } 00:19:32.297 } 00:19:32.297 }' 00:19:32.297 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.298 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.298 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.298 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:32.564 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:32.837 [2024-07-15 14:12:18.779679] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 1475c63c-7747-45f7-89f8-37afcb96f3b5 '!=' 1475c63c-7747-45f7-89f8-37afcb96f3b5 ']' 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 196205 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 196205 ']' 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 196205 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 196205 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 196205' 00:19:32.837 killing process with pid 196205 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 196205 00:19:32.837 [2024-07-15 14:12:18.829471] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.837 14:12:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 196205 00:19:32.837 [2024-07-15 14:12:18.829741] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.837 [2024-07-15 14:12:18.829894] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.837 [2024-07-15 14:12:18.830015] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:19:33.096 [2024-07-15 14:12:19.096628] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:34.473 14:12:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:34.473 00:19:34.473 real 0m17.095s 00:19:34.473 user 0m30.621s 00:19:34.473 sys 0m1.958s 00:19:34.473 14:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.473 14:12:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.473 ************************************ 00:19:34.473 END TEST raid_superblock_test 00:19:34.473 ************************************ 00:19:34.473 14:12:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:34.473 14:12:20 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:34.473 14:12:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:34.473 14:12:20 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.473 14:12:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:34.473 ************************************ 00:19:34.473 START TEST raid_read_error_test 00:19:34.473 ************************************ 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZSeo9UE58M 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=196728 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 196728 /var/tmp/spdk-raid.sock 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 
1 -z -f -L bdev_raid 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 196728 ']' 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:34.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.473 14:12:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:34.473 [2024-07-15 14:12:20.342372] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:34.474 [2024-07-15 14:12:20.342820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196728 ] 00:19:34.732 [2024-07-15 14:12:20.505847] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.990 [2024-07-15 14:12:20.755973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.990 [2024-07-15 14:12:20.953877] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:35.556 14:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.556 14:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:35.556 14:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:35.556 14:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:35.814 BaseBdev1_malloc 00:19:35.814 14:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:36.072 true 00:19:36.072 14:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:36.330 [2024-07-15 14:12:22.282282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:36.330 [2024-07-15 14:12:22.282999] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.330 [2024-07-15 14:12:22.283236] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:36.330 [2024-07-15 14:12:22.283458] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.330 [2024-07-15 14:12:22.285446] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.330 [2024-07-15 14:12:22.285693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:36.330 BaseBdev1 00:19:36.330 14:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:36.331 14:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:36.589 BaseBdev2_malloc 00:19:36.848 14:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:37.105 true 00:19:37.105 14:12:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:37.105 [2024-07-15 14:12:23.105646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:37.105 [2024-07-15 14:12:23.106065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.105 [2024-07-15 14:12:23.106305] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:37.105 [2024-07-15 14:12:23.106514] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.105 [2024-07-15 14:12:23.108423] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.105 [2024-07-15 14:12:23.108652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:37.363 BaseBdev2 00:19:37.363 14:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:37.363 14:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:37.622 BaseBdev3_malloc 00:19:37.622 14:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:37.892 true 00:19:37.892 14:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:38.149 [2024-07-15 14:12:23.960858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:38.149 [2024-07-15 14:12:23.961570] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:38.149 [2024-07-15 14:12:23.961821] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:38.149 [2024-07-15 14:12:23.962038] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:38.149 [2024-07-15 14:12:23.963942] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:38.149 [2024-07-15 14:12:23.964180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:38.149 BaseBdev3 00:19:38.149 14:12:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:38.407 [2024-07-15 14:12:24.257128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:38.407 [2024-07-15 14:12:24.258846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:38.407 [2024-07-15 14:12:24.259033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:38.407 [2024-07-15 14:12:24.259328] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:38.407 [2024-07-15 14:12:24.259454] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:38.407 [2024-07-15 14:12:24.259636] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:38.407 [2024-07-15 14:12:24.260030] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:38.407 [2024-07-15 14:12:24.260153] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:38.407 [2024-07-15 14:12:24.260382] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.407 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.664 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.664 "name": "raid_bdev1", 00:19:38.664 "uuid": "3fdaeda3-2e5a-43a0-8ff3-5a810fb038a2", 00:19:38.664 "strip_size_kb": 64, 00:19:38.664 "state": "online", 00:19:38.664 "raid_level": "concat", 00:19:38.664 "superblock": true, 00:19:38.664 "num_base_bdevs": 3, 00:19:38.664 "num_base_bdevs_discovered": 3, 00:19:38.664 "num_base_bdevs_operational": 3, 00:19:38.664 "base_bdevs_list": [ 00:19:38.664 { 00:19:38.664 "name": "BaseBdev1", 00:19:38.664 "uuid": "49761e8b-cbd8-5ede-84fc-52cedd8c4afb", 00:19:38.664 "is_configured": true, 00:19:38.664 "data_offset": 2048, 00:19:38.664 "data_size": 63488 00:19:38.664 }, 00:19:38.664 { 00:19:38.664 "name": "BaseBdev2", 00:19:38.664 "uuid": "11b7e532-46a7-50bd-b582-32b0cf2e4452", 00:19:38.664 "is_configured": true, 00:19:38.664 "data_offset": 2048, 00:19:38.664 "data_size": 63488 00:19:38.664 }, 00:19:38.664 { 00:19:38.664 "name": "BaseBdev3", 00:19:38.664 "uuid": "23b17f2e-b9b1-5042-9f4a-739b8cddddc4", 00:19:38.664 "is_configured": true, 00:19:38.664 "data_offset": 2048, 00:19:38.664 "data_size": 63488 00:19:38.664 } 00:19:38.664 ] 00:19:38.664 }' 00:19:38.664 14:12:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.664 14:12:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:19:39.229 14:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:39.229 14:12:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:39.487 [2024-07-15 14:12:25.258358] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:40.422 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.681 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.939 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:40.939 "name": "raid_bdev1", 00:19:40.939 "uuid": "3fdaeda3-2e5a-43a0-8ff3-5a810fb038a2", 00:19:40.939 "strip_size_kb": 64, 00:19:40.939 "state": "online", 00:19:40.939 "raid_level": "concat", 00:19:40.939 "superblock": true, 00:19:40.939 "num_base_bdevs": 3, 00:19:40.939 "num_base_bdevs_discovered": 3, 00:19:40.939 "num_base_bdevs_operational": 3, 00:19:40.939 "base_bdevs_list": [ 00:19:40.939 { 00:19:40.939 "name": "BaseBdev1", 00:19:40.939 "uuid": "49761e8b-cbd8-5ede-84fc-52cedd8c4afb", 00:19:40.939 "is_configured": true, 00:19:40.939 "data_offset": 2048, 00:19:40.939 "data_size": 63488 00:19:40.939 }, 00:19:40.939 { 00:19:40.939 "name": "BaseBdev2", 00:19:40.939 "uuid": "11b7e532-46a7-50bd-b582-32b0cf2e4452", 00:19:40.939 "is_configured": true, 00:19:40.939 "data_offset": 2048, 00:19:40.939 "data_size": 63488 00:19:40.939 }, 00:19:40.940 { 00:19:40.940 "name": "BaseBdev3", 00:19:40.940 "uuid": "23b17f2e-b9b1-5042-9f4a-739b8cddddc4", 00:19:40.940 "is_configured": true, 00:19:40.940 "data_offset": 2048, 00:19:40.940 "data_size": 63488 00:19:40.940 } 00:19:40.940 ] 
00:19:40.940 }' 00:19:40.940 14:12:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:40.940 14:12:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.507 14:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:41.766 [2024-07-15 14:12:27.650694] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.766 [2024-07-15 14:12:27.651035] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.766 [2024-07-15 14:12:27.652493] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.766 [2024-07-15 14:12:27.652682] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.766 [2024-07-15 14:12:27.652765] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.766 [2024-07-15 14:12:27.652990] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:41.766 0 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 196728 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 196728 ']' 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 196728 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 196728 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 196728' 00:19:41.766 killing process with pid 196728 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 196728 00:19:41.766 [2024-07-15 14:12:27.696222] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:41.766 14:12:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 196728 00:19:42.024 [2024-07-15 14:12:27.898266] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZSeo9UE58M 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:19:43.471 00:19:43.471 real 0m8.815s 00:19:43.471 user 0m13.637s 
00:19:43.471 sys 0m0.978s 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.471 14:12:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.471 ************************************ 00:19:43.471 END TEST raid_read_error_test 00:19:43.471 ************************************ 00:19:43.471 14:12:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:43.471 14:12:29 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:19:43.471 14:12:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:43.471 14:12:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.471 14:12:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.471 ************************************ 00:19:43.471 START TEST raid_write_error_test 00:19:43.471 ************************************ 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:43.471 14:12:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.1SPx24lgv2 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=196946 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 196946 /var/tmp/spdk-raid.sock 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 196946 ']' 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:43.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:43.471 14:12:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.471 [2024-07-15 14:12:29.205542] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:19:43.471 [2024-07-15 14:12:29.206220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196946 ] 00:19:43.471 [2024-07-15 14:12:29.359066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.730 [2024-07-15 14:12:29.579358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.988 [2024-07-15 14:12:29.782737] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:44.555 14:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.555 14:12:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:19:44.555 14:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:44.555 14:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:44.813 BaseBdev1_malloc 00:19:44.813 14:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:45.071 true 00:19:45.071 14:12:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:45.329 [2024-07-15 14:12:31.081122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:45.329 [2024-07-15 14:12:31.081829] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:45.329 [2024-07-15 14:12:31.082073] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:19:45.329 [2024-07-15 14:12:31.082291] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:45.329 [2024-07-15 14:12:31.084263] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:45.329 [2024-07-15 14:12:31.084517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:45.329 BaseBdev1 00:19:45.329 14:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:45.329 14:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:45.588 BaseBdev2_malloc 00:19:45.588 14:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:45.846 true 00:19:45.846 14:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:46.105 [2024-07-15 14:12:31.907377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:46.105 [2024-07-15 14:12:31.907862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.105 [2024-07-15 14:12:31.908104] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:46.105 [2024-07-15 14:12:31.908316] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.105 [2024-07-15 14:12:31.910253] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.105 [2024-07-15 14:12:31.910481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:46.105 BaseBdev2 00:19:46.105 14:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:46.105 14:12:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:46.364 BaseBdev3_malloc 00:19:46.364 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:46.623 true 00:19:46.623 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:46.881 [2024-07-15 14:12:32.653696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:46.881 [2024-07-15 14:12:32.654357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.881 [2024-07-15 14:12:32.654589] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:19:46.881 [2024-07-15 14:12:32.654865] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.881 [2024-07-15 14:12:32.656793] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.881 [2024-07-15 14:12:32.657034] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:46.881 BaseBdev3 00:19:46.881 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:47.178 [2024-07-15 14:12:32.901892] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.178 [2024-07-15 14:12:32.903553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.178 [2024-07-15 14:12:32.903749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:47.178 [2024-07-15 14:12:32.904112] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:19:47.178 [2024-07-15 14:12:32.904241] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:47.178 [2024-07-15 14:12:32.904432] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:47.178 [2024-07-15 14:12:32.904722] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:19:47.178 [2024-07-15 14:12:32.904861] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:19:47.178 [2024-07-15 14:12:32.905096] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.178 14:12:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.478 14:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:47.478 "name": "raid_bdev1", 00:19:47.478 "uuid": "2f2151e0-cc28-434f-af6d-b9fbae84925c", 00:19:47.478 "strip_size_kb": 64, 00:19:47.478 "state": "online", 00:19:47.478 "raid_level": "concat", 00:19:47.478 "superblock": true, 00:19:47.478 "num_base_bdevs": 3, 00:19:47.478 "num_base_bdevs_discovered": 3, 00:19:47.478 "num_base_bdevs_operational": 3, 00:19:47.478 "base_bdevs_list": [ 00:19:47.478 { 00:19:47.478 "name": "BaseBdev1", 00:19:47.478 "uuid": "d24898dd-3b00-5bc9-9593-55755735a0a4", 00:19:47.478 "is_configured": true, 
00:19:47.478 "data_offset": 2048, 00:19:47.478 "data_size": 63488 00:19:47.478 }, 00:19:47.478 { 00:19:47.478 "name": "BaseBdev2", 00:19:47.478 "uuid": "80de7c3a-c638-51e8-9c80-ffa2cec44832", 00:19:47.478 "is_configured": true, 00:19:47.478 "data_offset": 2048, 00:19:47.478 "data_size": 63488 00:19:47.478 }, 00:19:47.478 { 00:19:47.478 "name": "BaseBdev3", 00:19:47.478 "uuid": "a7343827-88d0-548b-8a88-c30fd82be54e", 00:19:47.478 "is_configured": true, 00:19:47.478 "data_offset": 2048, 00:19:47.478 "data_size": 63488 00:19:47.478 } 00:19:47.478 ] 00:19:47.478 }' 00:19:47.478 14:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:47.478 14:12:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:48.044 14:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:48.044 14:12:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:48.044 [2024-07-15 14:12:33.899324] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:48.979 14:12:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.238 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.497 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.497 "name": "raid_bdev1", 00:19:49.497 "uuid": "2f2151e0-cc28-434f-af6d-b9fbae84925c", 00:19:49.497 "strip_size_kb": 64, 00:19:49.497 "state": "online", 00:19:49.497 "raid_level": "concat", 00:19:49.497 "superblock": true, 00:19:49.497 "num_base_bdevs": 3, 00:19:49.497 "num_base_bdevs_discovered": 3, 
00:19:49.497 "num_base_bdevs_operational": 3, 00:19:49.497 "base_bdevs_list": [ 00:19:49.497 { 00:19:49.497 "name": "BaseBdev1", 00:19:49.497 "uuid": "d24898dd-3b00-5bc9-9593-55755735a0a4", 00:19:49.497 "is_configured": true, 00:19:49.497 "data_offset": 2048, 00:19:49.497 "data_size": 63488 00:19:49.497 }, 00:19:49.497 { 00:19:49.497 "name": "BaseBdev2", 00:19:49.497 "uuid": "80de7c3a-c638-51e8-9c80-ffa2cec44832", 00:19:49.497 "is_configured": true, 00:19:49.497 "data_offset": 2048, 00:19:49.497 "data_size": 63488 00:19:49.497 }, 00:19:49.497 { 00:19:49.497 "name": "BaseBdev3", 00:19:49.497 "uuid": "a7343827-88d0-548b-8a88-c30fd82be54e", 00:19:49.497 "is_configured": true, 00:19:49.497 "data_offset": 2048, 00:19:49.497 "data_size": 63488 00:19:49.497 } 00:19:49.497 ] 00:19:49.497 }' 00:19:49.497 14:12:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.497 14:12:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.064 14:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:50.323 [2024-07-15 14:12:36.291818] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:50.323 [2024-07-15 14:12:36.292057] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:50.323 [2024-07-15 14:12:36.293516] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:50.323 [2024-07-15 14:12:36.293685] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.323 [2024-07-15 14:12:36.293858] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:50.323 [2024-07-15 14:12:36.293968] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:19:50.323 0 00:19:50.323 14:12:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 196946 00:19:50.323 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 196946 ']' 00:19:50.323 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 196946 00:19:50.323 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:19:50.323 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:50.323 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 196946 00:19:50.582 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:50.582 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:50.582 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 196946' 00:19:50.582 killing process with pid 196946 00:19:50.582 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 196946 00:19:50.582 [2024-07-15 14:12:36.332197] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:50.582 14:12:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 196946 00:19:50.582 [2024-07-15 14:12:36.529364] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.1SPx24lgv2 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:19:51.956 00:19:51.956 real 0m8.563s 00:19:51.956 user 0m13.191s 00:19:51.956 sys 0m0.926s 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:51.956 14:12:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.956 ************************************ 00:19:51.956 END TEST raid_write_error_test 00:19:51.956 ************************************ 00:19:51.956 14:12:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:19:51.956 14:12:37 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:51.956 14:12:37 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:51.956 14:12:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:51.956 14:12:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:51.956 14:12:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:51.956 ************************************ 00:19:51.956 START TEST raid_state_function_test 00:19:51.956 ************************************ 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=197156 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 197156' 00:19:51.956 Process raid pid: 197156 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 197156 /var/tmp/spdk-raid.sock 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 197156 ']' 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:51.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:51.956 14:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:51.956 [2024-07-15 14:12:37.822754] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
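The xtrace that follows drives the raid_state_function_test flow entirely over the /var/tmp/spdk-raid.sock RPC socket: a raid1 volume named Existed_Raid is requested before its base bdevs exist (so it sits in the "configuring" state), the malloc base bdevs are then added one at a time, and the volume is re-queried after each step until it reports "online". A minimal standalone sketch of that same RPC sequence is given below for reference; it reuses only commands that appear verbatim in the trace, compresses the delete/recreate steps the test script performs between checks, and assumes a bdev_svc app is already listening on the socket.

  # Assumes bdev_svc is already running with -r /var/tmp/spdk-raid.sock (as in the trace above).
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # Request the raid1 volume before any base bdev exists; it stays in the "configuring" state.
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Create the 32 MB / 512-byte-block malloc base bdevs (65536 blocks each, matching the trace);
  # the raid module claims each one and switches the volume to "online" once all three are configured.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  $RPC bdev_malloc_create 32 512 -b BaseBdev3

  # Inspect the volume the same way verify_raid_bdev_state does in the trace.
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'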
00:19:51.957 [2024-07-15 14:12:37.823222] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.214 [2024-07-15 14:12:37.988934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.214 [2024-07-15 14:12:38.204179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.472 [2024-07-15 14:12:38.403672] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.039 14:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:53.039 14:12:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:19:53.039 14:12:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:53.297 [2024-07-15 14:12:39.111081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:53.297 [2024-07-15 14:12:39.111363] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:53.297 [2024-07-15 14:12:39.111525] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:53.297 [2024-07-15 14:12:39.111664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:53.297 [2024-07-15 14:12:39.111791] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:53.297 [2024-07-15 14:12:39.111927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.297 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:53.555 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:53.555 "name": "Existed_Raid", 00:19:53.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.555 
"strip_size_kb": 0, 00:19:53.555 "state": "configuring", 00:19:53.555 "raid_level": "raid1", 00:19:53.555 "superblock": false, 00:19:53.555 "num_base_bdevs": 3, 00:19:53.555 "num_base_bdevs_discovered": 0, 00:19:53.555 "num_base_bdevs_operational": 3, 00:19:53.555 "base_bdevs_list": [ 00:19:53.555 { 00:19:53.555 "name": "BaseBdev1", 00:19:53.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.555 "is_configured": false, 00:19:53.555 "data_offset": 0, 00:19:53.555 "data_size": 0 00:19:53.555 }, 00:19:53.555 { 00:19:53.555 "name": "BaseBdev2", 00:19:53.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.555 "is_configured": false, 00:19:53.555 "data_offset": 0, 00:19:53.555 "data_size": 0 00:19:53.555 }, 00:19:53.555 { 00:19:53.555 "name": "BaseBdev3", 00:19:53.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.555 "is_configured": false, 00:19:53.555 "data_offset": 0, 00:19:53.555 "data_size": 0 00:19:53.555 } 00:19:53.555 ] 00:19:53.555 }' 00:19:53.555 14:12:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:53.555 14:12:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:54.488 14:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:54.488 [2024-07-15 14:12:40.399174] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:54.488 [2024-07-15 14:12:40.399412] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:54.488 14:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:54.746 [2024-07-15 14:12:40.631232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.746 [2024-07-15 14:12:40.631872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.746 [2024-07-15 14:12:40.632005] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:54.746 [2024-07-15 14:12:40.632136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:54.746 [2024-07-15 14:12:40.632332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:54.746 [2024-07-15 14:12:40.632494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:54.746 14:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:55.004 [2024-07-15 14:12:40.902293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.004 BaseBdev1 00:19:55.004 14:12:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:55.004 14:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:19:55.004 14:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:55.004 14:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:55.004 14:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- 
# [[ -z '' ]] 00:19:55.004 14:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:55.004 14:12:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:55.262 14:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:55.520 [ 00:19:55.520 { 00:19:55.520 "name": "BaseBdev1", 00:19:55.520 "aliases": [ 00:19:55.520 "4f079c7f-76f9-4cd1-a94a-a38976c72dbc" 00:19:55.520 ], 00:19:55.520 "product_name": "Malloc disk", 00:19:55.520 "block_size": 512, 00:19:55.520 "num_blocks": 65536, 00:19:55.520 "uuid": "4f079c7f-76f9-4cd1-a94a-a38976c72dbc", 00:19:55.520 "assigned_rate_limits": { 00:19:55.520 "rw_ios_per_sec": 0, 00:19:55.520 "rw_mbytes_per_sec": 0, 00:19:55.520 "r_mbytes_per_sec": 0, 00:19:55.520 "w_mbytes_per_sec": 0 00:19:55.520 }, 00:19:55.520 "claimed": true, 00:19:55.520 "claim_type": "exclusive_write", 00:19:55.520 "zoned": false, 00:19:55.520 "supported_io_types": { 00:19:55.520 "read": true, 00:19:55.520 "write": true, 00:19:55.520 "unmap": true, 00:19:55.520 "flush": true, 00:19:55.520 "reset": true, 00:19:55.520 "nvme_admin": false, 00:19:55.520 "nvme_io": false, 00:19:55.520 "nvme_io_md": false, 00:19:55.520 "write_zeroes": true, 00:19:55.520 "zcopy": true, 00:19:55.520 "get_zone_info": false, 00:19:55.520 "zone_management": false, 00:19:55.520 "zone_append": false, 00:19:55.520 "compare": false, 00:19:55.520 "compare_and_write": false, 00:19:55.520 "abort": true, 00:19:55.520 "seek_hole": false, 00:19:55.520 "seek_data": false, 00:19:55.520 "copy": true, 00:19:55.520 "nvme_iov_md": false 00:19:55.520 }, 00:19:55.520 "memory_domains": [ 00:19:55.520 { 00:19:55.520 "dma_device_id": "system", 00:19:55.520 "dma_device_type": 1 00:19:55.520 }, 00:19:55.520 { 00:19:55.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:55.520 "dma_device_type": 2 00:19:55.520 } 00:19:55.520 ], 00:19:55.520 "driver_specific": {} 00:19:55.520 } 00:19:55.520 ] 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.520 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:55.827 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:55.827 "name": "Existed_Raid", 00:19:55.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.827 "strip_size_kb": 0, 00:19:55.827 "state": "configuring", 00:19:55.827 "raid_level": "raid1", 00:19:55.827 "superblock": false, 00:19:55.827 "num_base_bdevs": 3, 00:19:55.827 "num_base_bdevs_discovered": 1, 00:19:55.827 "num_base_bdevs_operational": 3, 00:19:55.827 "base_bdevs_list": [ 00:19:55.827 { 00:19:55.827 "name": "BaseBdev1", 00:19:55.827 "uuid": "4f079c7f-76f9-4cd1-a94a-a38976c72dbc", 00:19:55.827 "is_configured": true, 00:19:55.827 "data_offset": 0, 00:19:55.827 "data_size": 65536 00:19:55.827 }, 00:19:55.827 { 00:19:55.827 "name": "BaseBdev2", 00:19:55.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.827 "is_configured": false, 00:19:55.827 "data_offset": 0, 00:19:55.827 "data_size": 0 00:19:55.827 }, 00:19:55.827 { 00:19:55.827 "name": "BaseBdev3", 00:19:55.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:55.827 "is_configured": false, 00:19:55.827 "data_offset": 0, 00:19:55.827 "data_size": 0 00:19:55.827 } 00:19:55.827 ] 00:19:55.827 }' 00:19:55.827 14:12:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:55.827 14:12:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:56.760 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:56.760 [2024-07-15 14:12:42.638554] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:56.760 [2024-07-15 14:12:42.638816] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:56.760 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:57.018 [2024-07-15 14:12:42.874645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.018 [2024-07-15 14:12:42.876343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.018 [2024-07-15 14:12:42.876534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.018 [2024-07-15 14:12:42.876646] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:57.018 [2024-07-15 14:12:42.876721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:57.018 14:12:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.276 14:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:57.276 "name": "Existed_Raid", 00:19:57.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.276 "strip_size_kb": 0, 00:19:57.276 "state": "configuring", 00:19:57.276 "raid_level": "raid1", 00:19:57.276 "superblock": false, 00:19:57.276 "num_base_bdevs": 3, 00:19:57.276 "num_base_bdevs_discovered": 1, 00:19:57.276 "num_base_bdevs_operational": 3, 00:19:57.276 "base_bdevs_list": [ 00:19:57.276 { 00:19:57.276 "name": "BaseBdev1", 00:19:57.276 "uuid": "4f079c7f-76f9-4cd1-a94a-a38976c72dbc", 00:19:57.276 "is_configured": true, 00:19:57.276 "data_offset": 0, 00:19:57.276 "data_size": 65536 00:19:57.276 }, 00:19:57.276 { 00:19:57.276 "name": "BaseBdev2", 00:19:57.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.276 "is_configured": false, 00:19:57.276 "data_offset": 0, 00:19:57.276 "data_size": 0 00:19:57.276 }, 00:19:57.276 { 00:19:57.276 "name": "BaseBdev3", 00:19:57.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:57.276 "is_configured": false, 00:19:57.276 "data_offset": 0, 00:19:57.276 "data_size": 0 00:19:57.276 } 00:19:57.276 ] 00:19:57.276 }' 00:19:57.276 14:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:57.276 14:12:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:57.841 14:12:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:58.409 [2024-07-15 14:12:44.180428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:58.409 BaseBdev2 00:19:58.409 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:58.409 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:19:58.409 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:58.409 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:19:58.409 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:58.409 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:58.409 14:12:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:58.668 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:58.953 [ 00:19:58.953 { 00:19:58.953 "name": "BaseBdev2", 00:19:58.953 "aliases": [ 00:19:58.953 "cc315dc1-ee4a-4eb6-ac08-0553e6798941" 00:19:58.953 ], 00:19:58.953 "product_name": "Malloc disk", 00:19:58.953 "block_size": 512, 00:19:58.953 "num_blocks": 65536, 00:19:58.953 "uuid": "cc315dc1-ee4a-4eb6-ac08-0553e6798941", 00:19:58.953 "assigned_rate_limits": { 00:19:58.953 "rw_ios_per_sec": 0, 00:19:58.953 "rw_mbytes_per_sec": 0, 00:19:58.953 "r_mbytes_per_sec": 0, 00:19:58.953 "w_mbytes_per_sec": 0 00:19:58.953 }, 00:19:58.954 "claimed": true, 00:19:58.954 "claim_type": "exclusive_write", 00:19:58.954 "zoned": false, 00:19:58.954 "supported_io_types": { 00:19:58.954 "read": true, 00:19:58.954 "write": true, 00:19:58.954 "unmap": true, 00:19:58.954 "flush": true, 00:19:58.954 "reset": true, 00:19:58.954 "nvme_admin": false, 00:19:58.954 "nvme_io": false, 00:19:58.954 "nvme_io_md": false, 00:19:58.954 "write_zeroes": true, 00:19:58.954 "zcopy": true, 00:19:58.954 "get_zone_info": false, 00:19:58.954 "zone_management": false, 00:19:58.954 "zone_append": false, 00:19:58.954 "compare": false, 00:19:58.954 "compare_and_write": false, 00:19:58.954 "abort": true, 00:19:58.954 "seek_hole": false, 00:19:58.954 "seek_data": false, 00:19:58.954 "copy": true, 00:19:58.954 "nvme_iov_md": false 00:19:58.954 }, 00:19:58.954 "memory_domains": [ 00:19:58.954 { 00:19:58.954 "dma_device_id": "system", 00:19:58.954 "dma_device_type": 1 00:19:58.954 }, 00:19:58.954 { 00:19:58.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:58.954 "dma_device_type": 2 00:19:58.954 } 00:19:58.954 ], 00:19:58.954 "driver_specific": {} 00:19:58.954 } 00:19:58.954 ] 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.954 14:12:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.228 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.228 "name": "Existed_Raid", 00:19:59.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.229 "strip_size_kb": 0, 00:19:59.229 "state": "configuring", 00:19:59.229 "raid_level": "raid1", 00:19:59.229 "superblock": false, 00:19:59.229 "num_base_bdevs": 3, 00:19:59.229 "num_base_bdevs_discovered": 2, 00:19:59.229 "num_base_bdevs_operational": 3, 00:19:59.229 "base_bdevs_list": [ 00:19:59.229 { 00:19:59.229 "name": "BaseBdev1", 00:19:59.229 "uuid": "4f079c7f-76f9-4cd1-a94a-a38976c72dbc", 00:19:59.229 "is_configured": true, 00:19:59.229 "data_offset": 0, 00:19:59.229 "data_size": 65536 00:19:59.229 }, 00:19:59.229 { 00:19:59.229 "name": "BaseBdev2", 00:19:59.229 "uuid": "cc315dc1-ee4a-4eb6-ac08-0553e6798941", 00:19:59.229 "is_configured": true, 00:19:59.229 "data_offset": 0, 00:19:59.229 "data_size": 65536 00:19:59.229 }, 00:19:59.229 { 00:19:59.229 "name": "BaseBdev3", 00:19:59.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.229 "is_configured": false, 00:19:59.229 "data_offset": 0, 00:19:59.229 "data_size": 0 00:19:59.229 } 00:19:59.229 ] 00:19:59.229 }' 00:19:59.229 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.229 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:59.797 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:00.057 [2024-07-15 14:12:45.938210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.057 [2024-07-15 14:12:45.938482] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:00.057 [2024-07-15 14:12:45.938535] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:00.057 [2024-07-15 14:12:45.938758] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:00.057 [2024-07-15 14:12:45.939135] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:00.057 [2024-07-15 14:12:45.939267] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:20:00.057 [2024-07-15 14:12:45.939582] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.057 BaseBdev3 00:20:00.057 14:12:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:00.057 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:00.057 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:00.057 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:00.057 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:00.057 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:00.057 14:12:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.319 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:00.584 [ 00:20:00.584 { 00:20:00.584 "name": "BaseBdev3", 00:20:00.584 "aliases": [ 00:20:00.584 "14f9e9e5-ee49-4f43-b000-b14e69e1ccee" 00:20:00.584 ], 00:20:00.584 "product_name": "Malloc disk", 00:20:00.584 "block_size": 512, 00:20:00.584 "num_blocks": 65536, 00:20:00.584 "uuid": "14f9e9e5-ee49-4f43-b000-b14e69e1ccee", 00:20:00.584 "assigned_rate_limits": { 00:20:00.584 "rw_ios_per_sec": 0, 00:20:00.584 "rw_mbytes_per_sec": 0, 00:20:00.584 "r_mbytes_per_sec": 0, 00:20:00.584 "w_mbytes_per_sec": 0 00:20:00.584 }, 00:20:00.584 "claimed": true, 00:20:00.584 "claim_type": "exclusive_write", 00:20:00.584 "zoned": false, 00:20:00.584 "supported_io_types": { 00:20:00.584 "read": true, 00:20:00.584 "write": true, 00:20:00.584 "unmap": true, 00:20:00.584 "flush": true, 00:20:00.584 "reset": true, 00:20:00.584 "nvme_admin": false, 00:20:00.584 "nvme_io": false, 00:20:00.584 "nvme_io_md": false, 00:20:00.584 "write_zeroes": true, 00:20:00.584 "zcopy": true, 00:20:00.584 "get_zone_info": false, 00:20:00.584 "zone_management": false, 00:20:00.584 "zone_append": false, 00:20:00.584 "compare": false, 00:20:00.584 "compare_and_write": false, 00:20:00.584 "abort": true, 00:20:00.584 "seek_hole": false, 00:20:00.584 "seek_data": false, 00:20:00.584 "copy": true, 00:20:00.584 "nvme_iov_md": false 00:20:00.584 }, 00:20:00.584 "memory_domains": [ 00:20:00.584 { 00:20:00.584 "dma_device_id": "system", 00:20:00.584 "dma_device_type": 1 00:20:00.584 }, 00:20:00.584 { 00:20:00.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.584 "dma_device_type": 2 00:20:00.584 } 00:20:00.584 ], 00:20:00.584 "driver_specific": {} 00:20:00.584 } 00:20:00.584 ] 00:20:00.584 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:00.584 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.585 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:00.851 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:00.851 "name": "Existed_Raid", 00:20:00.851 "uuid": "4a2f963d-fbd1-488a-b5ce-dec2a30688fa", 00:20:00.851 "strip_size_kb": 0, 00:20:00.851 "state": "online", 00:20:00.851 "raid_level": "raid1", 00:20:00.851 "superblock": false, 00:20:00.851 "num_base_bdevs": 3, 00:20:00.851 "num_base_bdevs_discovered": 3, 00:20:00.851 "num_base_bdevs_operational": 3, 00:20:00.851 "base_bdevs_list": [ 00:20:00.851 { 00:20:00.851 "name": "BaseBdev1", 00:20:00.851 "uuid": "4f079c7f-76f9-4cd1-a94a-a38976c72dbc", 00:20:00.851 "is_configured": true, 00:20:00.851 "data_offset": 0, 00:20:00.851 "data_size": 65536 00:20:00.851 }, 00:20:00.851 { 00:20:00.851 "name": "BaseBdev2", 00:20:00.851 "uuid": "cc315dc1-ee4a-4eb6-ac08-0553e6798941", 00:20:00.851 "is_configured": true, 00:20:00.851 "data_offset": 0, 00:20:00.851 "data_size": 65536 00:20:00.851 }, 00:20:00.851 { 00:20:00.851 "name": "BaseBdev3", 00:20:00.851 "uuid": "14f9e9e5-ee49-4f43-b000-b14e69e1ccee", 00:20:00.851 "is_configured": true, 00:20:00.851 "data_offset": 0, 00:20:00.851 "data_size": 65536 00:20:00.851 } 00:20:00.851 ] 00:20:00.851 }' 00:20:00.851 14:12:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:00.851 14:12:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:01.813 [2024-07-15 14:12:47.697554] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:01.813 "name": "Existed_Raid", 00:20:01.813 "aliases": [ 00:20:01.813 "4a2f963d-fbd1-488a-b5ce-dec2a30688fa" 00:20:01.813 ], 00:20:01.813 "product_name": "Raid Volume", 00:20:01.813 "block_size": 512, 00:20:01.813 "num_blocks": 65536, 00:20:01.813 "uuid": "4a2f963d-fbd1-488a-b5ce-dec2a30688fa", 00:20:01.813 "assigned_rate_limits": { 00:20:01.813 "rw_ios_per_sec": 0, 00:20:01.813 "rw_mbytes_per_sec": 0, 00:20:01.813 "r_mbytes_per_sec": 0, 00:20:01.813 "w_mbytes_per_sec": 0 00:20:01.813 }, 00:20:01.813 "claimed": false, 00:20:01.813 "zoned": false, 00:20:01.813 "supported_io_types": { 00:20:01.813 "read": true, 00:20:01.813 "write": true, 00:20:01.813 "unmap": false, 00:20:01.813 "flush": false, 00:20:01.813 "reset": true, 00:20:01.813 "nvme_admin": false, 00:20:01.813 
"nvme_io": false, 00:20:01.813 "nvme_io_md": false, 00:20:01.813 "write_zeroes": true, 00:20:01.813 "zcopy": false, 00:20:01.813 "get_zone_info": false, 00:20:01.813 "zone_management": false, 00:20:01.813 "zone_append": false, 00:20:01.813 "compare": false, 00:20:01.813 "compare_and_write": false, 00:20:01.813 "abort": false, 00:20:01.813 "seek_hole": false, 00:20:01.813 "seek_data": false, 00:20:01.813 "copy": false, 00:20:01.813 "nvme_iov_md": false 00:20:01.813 }, 00:20:01.813 "memory_domains": [ 00:20:01.813 { 00:20:01.813 "dma_device_id": "system", 00:20:01.813 "dma_device_type": 1 00:20:01.813 }, 00:20:01.813 { 00:20:01.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.813 "dma_device_type": 2 00:20:01.813 }, 00:20:01.813 { 00:20:01.813 "dma_device_id": "system", 00:20:01.813 "dma_device_type": 1 00:20:01.813 }, 00:20:01.813 { 00:20:01.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.813 "dma_device_type": 2 00:20:01.813 }, 00:20:01.813 { 00:20:01.813 "dma_device_id": "system", 00:20:01.813 "dma_device_type": 1 00:20:01.813 }, 00:20:01.813 { 00:20:01.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:01.813 "dma_device_type": 2 00:20:01.813 } 00:20:01.813 ], 00:20:01.813 "driver_specific": { 00:20:01.813 "raid": { 00:20:01.813 "uuid": "4a2f963d-fbd1-488a-b5ce-dec2a30688fa", 00:20:01.813 "strip_size_kb": 0, 00:20:01.813 "state": "online", 00:20:01.813 "raid_level": "raid1", 00:20:01.813 "superblock": false, 00:20:01.813 "num_base_bdevs": 3, 00:20:01.813 "num_base_bdevs_discovered": 3, 00:20:01.813 "num_base_bdevs_operational": 3, 00:20:01.813 "base_bdevs_list": [ 00:20:01.813 { 00:20:01.813 "name": "BaseBdev1", 00:20:01.813 "uuid": "4f079c7f-76f9-4cd1-a94a-a38976c72dbc", 00:20:01.813 "is_configured": true, 00:20:01.813 "data_offset": 0, 00:20:01.813 "data_size": 65536 00:20:01.813 }, 00:20:01.813 { 00:20:01.813 "name": "BaseBdev2", 00:20:01.813 "uuid": "cc315dc1-ee4a-4eb6-ac08-0553e6798941", 00:20:01.813 "is_configured": true, 00:20:01.813 "data_offset": 0, 00:20:01.813 "data_size": 65536 00:20:01.813 }, 00:20:01.813 { 00:20:01.813 "name": "BaseBdev3", 00:20:01.813 "uuid": "14f9e9e5-ee49-4f43-b000-b14e69e1ccee", 00:20:01.813 "is_configured": true, 00:20:01.813 "data_offset": 0, 00:20:01.813 "data_size": 65536 00:20:01.813 } 00:20:01.813 ] 00:20:01.813 } 00:20:01.813 } 00:20:01.813 }' 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:01.813 BaseBdev2 00:20:01.813 BaseBdev3' 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:01.813 14:12:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.073 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.073 "name": "BaseBdev1", 00:20:02.073 "aliases": [ 00:20:02.073 "4f079c7f-76f9-4cd1-a94a-a38976c72dbc" 00:20:02.073 ], 00:20:02.073 "product_name": "Malloc disk", 00:20:02.073 "block_size": 512, 00:20:02.073 "num_blocks": 65536, 00:20:02.073 "uuid": "4f079c7f-76f9-4cd1-a94a-a38976c72dbc", 00:20:02.073 "assigned_rate_limits": { 00:20:02.073 "rw_ios_per_sec": 0, 
00:20:02.073 "rw_mbytes_per_sec": 0, 00:20:02.073 "r_mbytes_per_sec": 0, 00:20:02.073 "w_mbytes_per_sec": 0 00:20:02.073 }, 00:20:02.073 "claimed": true, 00:20:02.073 "claim_type": "exclusive_write", 00:20:02.073 "zoned": false, 00:20:02.073 "supported_io_types": { 00:20:02.073 "read": true, 00:20:02.073 "write": true, 00:20:02.073 "unmap": true, 00:20:02.073 "flush": true, 00:20:02.073 "reset": true, 00:20:02.073 "nvme_admin": false, 00:20:02.073 "nvme_io": false, 00:20:02.073 "nvme_io_md": false, 00:20:02.073 "write_zeroes": true, 00:20:02.073 "zcopy": true, 00:20:02.073 "get_zone_info": false, 00:20:02.073 "zone_management": false, 00:20:02.073 "zone_append": false, 00:20:02.073 "compare": false, 00:20:02.073 "compare_and_write": false, 00:20:02.073 "abort": true, 00:20:02.073 "seek_hole": false, 00:20:02.073 "seek_data": false, 00:20:02.073 "copy": true, 00:20:02.073 "nvme_iov_md": false 00:20:02.073 }, 00:20:02.073 "memory_domains": [ 00:20:02.073 { 00:20:02.073 "dma_device_id": "system", 00:20:02.073 "dma_device_type": 1 00:20:02.073 }, 00:20:02.073 { 00:20:02.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.073 "dma_device_type": 2 00:20:02.073 } 00:20:02.073 ], 00:20:02.073 "driver_specific": {} 00:20:02.073 }' 00:20:02.073 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.332 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.592 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.592 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.592 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.592 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:02.592 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.851 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.851 "name": "BaseBdev2", 00:20:02.851 "aliases": [ 00:20:02.851 "cc315dc1-ee4a-4eb6-ac08-0553e6798941" 00:20:02.851 ], 00:20:02.851 "product_name": "Malloc disk", 00:20:02.851 "block_size": 512, 00:20:02.851 "num_blocks": 65536, 00:20:02.851 "uuid": "cc315dc1-ee4a-4eb6-ac08-0553e6798941", 00:20:02.851 "assigned_rate_limits": { 00:20:02.851 "rw_ios_per_sec": 0, 00:20:02.851 "rw_mbytes_per_sec": 0, 00:20:02.851 "r_mbytes_per_sec": 0, 00:20:02.851 "w_mbytes_per_sec": 0 00:20:02.851 }, 00:20:02.851 "claimed": true, 00:20:02.851 "claim_type": "exclusive_write", 
00:20:02.851 "zoned": false, 00:20:02.851 "supported_io_types": { 00:20:02.851 "read": true, 00:20:02.851 "write": true, 00:20:02.851 "unmap": true, 00:20:02.851 "flush": true, 00:20:02.851 "reset": true, 00:20:02.851 "nvme_admin": false, 00:20:02.851 "nvme_io": false, 00:20:02.851 "nvme_io_md": false, 00:20:02.851 "write_zeroes": true, 00:20:02.851 "zcopy": true, 00:20:02.851 "get_zone_info": false, 00:20:02.851 "zone_management": false, 00:20:02.851 "zone_append": false, 00:20:02.851 "compare": false, 00:20:02.851 "compare_and_write": false, 00:20:02.851 "abort": true, 00:20:02.851 "seek_hole": false, 00:20:02.851 "seek_data": false, 00:20:02.851 "copy": true, 00:20:02.851 "nvme_iov_md": false 00:20:02.851 }, 00:20:02.851 "memory_domains": [ 00:20:02.851 { 00:20:02.851 "dma_device_id": "system", 00:20:02.851 "dma_device_type": 1 00:20:02.851 }, 00:20:02.851 { 00:20:02.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.851 "dma_device_type": 2 00:20:02.851 } 00:20:02.851 ], 00:20:02.851 "driver_specific": {} 00:20:02.851 }' 00:20:02.851 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.851 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.109 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.109 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.109 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.109 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.109 14:12:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.109 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.109 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.109 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.109 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.367 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.367 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.367 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:03.367 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.626 "name": "BaseBdev3", 00:20:03.626 "aliases": [ 00:20:03.626 "14f9e9e5-ee49-4f43-b000-b14e69e1ccee" 00:20:03.626 ], 00:20:03.626 "product_name": "Malloc disk", 00:20:03.626 "block_size": 512, 00:20:03.626 "num_blocks": 65536, 00:20:03.626 "uuid": "14f9e9e5-ee49-4f43-b000-b14e69e1ccee", 00:20:03.626 "assigned_rate_limits": { 00:20:03.626 "rw_ios_per_sec": 0, 00:20:03.626 "rw_mbytes_per_sec": 0, 00:20:03.626 "r_mbytes_per_sec": 0, 00:20:03.626 "w_mbytes_per_sec": 0 00:20:03.626 }, 00:20:03.626 "claimed": true, 00:20:03.626 "claim_type": "exclusive_write", 00:20:03.626 "zoned": false, 00:20:03.626 "supported_io_types": { 00:20:03.626 "read": true, 00:20:03.626 "write": true, 00:20:03.626 "unmap": true, 00:20:03.626 "flush": true, 00:20:03.626 "reset": 
true, 00:20:03.626 "nvme_admin": false, 00:20:03.626 "nvme_io": false, 00:20:03.626 "nvme_io_md": false, 00:20:03.626 "write_zeroes": true, 00:20:03.626 "zcopy": true, 00:20:03.626 "get_zone_info": false, 00:20:03.626 "zone_management": false, 00:20:03.626 "zone_append": false, 00:20:03.626 "compare": false, 00:20:03.626 "compare_and_write": false, 00:20:03.626 "abort": true, 00:20:03.626 "seek_hole": false, 00:20:03.626 "seek_data": false, 00:20:03.626 "copy": true, 00:20:03.626 "nvme_iov_md": false 00:20:03.626 }, 00:20:03.626 "memory_domains": [ 00:20:03.626 { 00:20:03.626 "dma_device_id": "system", 00:20:03.626 "dma_device_type": 1 00:20:03.626 }, 00:20:03.626 { 00:20:03.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.626 "dma_device_type": 2 00:20:03.626 } 00:20:03.626 ], 00:20:03.626 "driver_specific": {} 00:20:03.626 }' 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.626 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.884 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.884 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.884 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.884 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.884 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.884 14:12:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:04.142 [2024-07-15 14:12:50.036395] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.142 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.709 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.709 "name": "Existed_Raid", 00:20:04.709 "uuid": "4a2f963d-fbd1-488a-b5ce-dec2a30688fa", 00:20:04.709 "strip_size_kb": 0, 00:20:04.709 "state": "online", 00:20:04.709 "raid_level": "raid1", 00:20:04.709 "superblock": false, 00:20:04.709 "num_base_bdevs": 3, 00:20:04.709 "num_base_bdevs_discovered": 2, 00:20:04.709 "num_base_bdevs_operational": 2, 00:20:04.709 "base_bdevs_list": [ 00:20:04.709 { 00:20:04.709 "name": null, 00:20:04.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.709 "is_configured": false, 00:20:04.709 "data_offset": 0, 00:20:04.709 "data_size": 65536 00:20:04.709 }, 00:20:04.709 { 00:20:04.709 "name": "BaseBdev2", 00:20:04.709 "uuid": "cc315dc1-ee4a-4eb6-ac08-0553e6798941", 00:20:04.709 "is_configured": true, 00:20:04.709 "data_offset": 0, 00:20:04.709 "data_size": 65536 00:20:04.709 }, 00:20:04.709 { 00:20:04.709 "name": "BaseBdev3", 00:20:04.709 "uuid": "14f9e9e5-ee49-4f43-b000-b14e69e1ccee", 00:20:04.709 "is_configured": true, 00:20:04.709 "data_offset": 0, 00:20:04.709 "data_size": 65536 00:20:04.709 } 00:20:04.709 ] 00:20:04.709 }' 00:20:04.709 14:12:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.709 14:12:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:05.277 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:05.277 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:05.277 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:05.277 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.536 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:05.536 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.536 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:05.795 [2024-07-15 14:12:51.627077] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:05.795 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:05.795 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:05.795 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r 
'.[0]["name"]' 00:20:05.795 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.062 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:06.063 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:06.063 14:12:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:06.321 [2024-07-15 14:12:52.176511] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:06.321 [2024-07-15 14:12:52.176776] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.321 [2024-07-15 14:12:52.260252] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.321 [2024-07-15 14:12:52.260449] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.321 [2024-07-15 14:12:52.260560] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:20:06.321 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:06.321 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:06.321 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:06.321 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.579 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:06.579 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:06.579 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:06.579 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:06.579 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:06.579 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:06.837 BaseBdev2 00:20:06.837 14:12:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:06.837 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:06.837 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:06.837 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:06.837 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:06.837 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:06.837 14:12:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.095 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:07.352 [ 00:20:07.352 { 00:20:07.352 "name": "BaseBdev2", 00:20:07.352 "aliases": [ 00:20:07.352 "6426658b-ed94-487a-90a3-adb8df466b42" 00:20:07.352 ], 00:20:07.352 "product_name": "Malloc disk", 00:20:07.352 "block_size": 512, 00:20:07.352 "num_blocks": 65536, 00:20:07.352 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:07.352 "assigned_rate_limits": { 00:20:07.352 "rw_ios_per_sec": 0, 00:20:07.352 "rw_mbytes_per_sec": 0, 00:20:07.352 "r_mbytes_per_sec": 0, 00:20:07.352 "w_mbytes_per_sec": 0 00:20:07.352 }, 00:20:07.352 "claimed": false, 00:20:07.352 "zoned": false, 00:20:07.352 "supported_io_types": { 00:20:07.352 "read": true, 00:20:07.352 "write": true, 00:20:07.352 "unmap": true, 00:20:07.352 "flush": true, 00:20:07.352 "reset": true, 00:20:07.352 "nvme_admin": false, 00:20:07.352 "nvme_io": false, 00:20:07.352 "nvme_io_md": false, 00:20:07.352 "write_zeroes": true, 00:20:07.352 "zcopy": true, 00:20:07.352 "get_zone_info": false, 00:20:07.352 "zone_management": false, 00:20:07.352 "zone_append": false, 00:20:07.352 "compare": false, 00:20:07.352 "compare_and_write": false, 00:20:07.352 "abort": true, 00:20:07.352 "seek_hole": false, 00:20:07.352 "seek_data": false, 00:20:07.352 "copy": true, 00:20:07.352 "nvme_iov_md": false 00:20:07.352 }, 00:20:07.352 "memory_domains": [ 00:20:07.352 { 00:20:07.352 "dma_device_id": "system", 00:20:07.352 "dma_device_type": 1 00:20:07.352 }, 00:20:07.352 { 00:20:07.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.352 "dma_device_type": 2 00:20:07.352 } 00:20:07.352 ], 00:20:07.352 "driver_specific": {} 00:20:07.352 } 00:20:07.352 ] 00:20:07.352 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:07.352 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:07.352 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:07.352 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:07.918 BaseBdev3 00:20:07.918 14:12:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:07.918 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:07.919 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:07.919 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:07.919 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:07.919 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:07.919 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:08.177 14:12:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:08.436 [ 00:20:08.436 { 00:20:08.436 "name": "BaseBdev3", 00:20:08.436 "aliases": [ 00:20:08.436 "2600c58c-54ca-484f-8c79-8405a16b5b37" 00:20:08.436 ], 00:20:08.436 "product_name": "Malloc disk", 00:20:08.436 "block_size": 512, 00:20:08.436 "num_blocks": 65536, 00:20:08.436 "uuid": 
"2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:08.436 "assigned_rate_limits": { 00:20:08.436 "rw_ios_per_sec": 0, 00:20:08.436 "rw_mbytes_per_sec": 0, 00:20:08.436 "r_mbytes_per_sec": 0, 00:20:08.436 "w_mbytes_per_sec": 0 00:20:08.436 }, 00:20:08.436 "claimed": false, 00:20:08.436 "zoned": false, 00:20:08.436 "supported_io_types": { 00:20:08.436 "read": true, 00:20:08.436 "write": true, 00:20:08.436 "unmap": true, 00:20:08.436 "flush": true, 00:20:08.436 "reset": true, 00:20:08.436 "nvme_admin": false, 00:20:08.436 "nvme_io": false, 00:20:08.436 "nvme_io_md": false, 00:20:08.436 "write_zeroes": true, 00:20:08.436 "zcopy": true, 00:20:08.436 "get_zone_info": false, 00:20:08.436 "zone_management": false, 00:20:08.436 "zone_append": false, 00:20:08.436 "compare": false, 00:20:08.436 "compare_and_write": false, 00:20:08.436 "abort": true, 00:20:08.436 "seek_hole": false, 00:20:08.436 "seek_data": false, 00:20:08.436 "copy": true, 00:20:08.436 "nvme_iov_md": false 00:20:08.436 }, 00:20:08.436 "memory_domains": [ 00:20:08.436 { 00:20:08.436 "dma_device_id": "system", 00:20:08.436 "dma_device_type": 1 00:20:08.436 }, 00:20:08.436 { 00:20:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:08.436 "dma_device_type": 2 00:20:08.436 } 00:20:08.436 ], 00:20:08.436 "driver_specific": {} 00:20:08.436 } 00:20:08.436 ] 00:20:08.436 14:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:08.436 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:08.436 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:08.436 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:08.719 [2024-07-15 14:12:54.486768] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.719 [2024-07-15 14:12:54.487081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.719 [2024-07-15 14:12:54.487222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.719 [2024-07-15 14:12:54.488800] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.719 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.978 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.978 "name": "Existed_Raid", 00:20:08.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.978 "strip_size_kb": 0, 00:20:08.978 "state": "configuring", 00:20:08.978 "raid_level": "raid1", 00:20:08.978 "superblock": false, 00:20:08.978 "num_base_bdevs": 3, 00:20:08.978 "num_base_bdevs_discovered": 2, 00:20:08.978 "num_base_bdevs_operational": 3, 00:20:08.978 "base_bdevs_list": [ 00:20:08.978 { 00:20:08.978 "name": "BaseBdev1", 00:20:08.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.978 "is_configured": false, 00:20:08.978 "data_offset": 0, 00:20:08.978 "data_size": 0 00:20:08.978 }, 00:20:08.978 { 00:20:08.978 "name": "BaseBdev2", 00:20:08.978 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:08.978 "is_configured": true, 00:20:08.978 "data_offset": 0, 00:20:08.978 "data_size": 65536 00:20:08.978 }, 00:20:08.978 { 00:20:08.978 "name": "BaseBdev3", 00:20:08.978 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:08.978 "is_configured": true, 00:20:08.978 "data_offset": 0, 00:20:08.978 "data_size": 65536 00:20:08.978 } 00:20:08.978 ] 00:20:08.978 }' 00:20:08.978 14:12:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.978 14:12:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:09.544 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:09.803 [2024-07-15 14:12:55.734996] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.803 14:12:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.371 14:12:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:10.371 "name": "Existed_Raid", 00:20:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.371 "strip_size_kb": 0, 00:20:10.371 "state": "configuring", 00:20:10.371 "raid_level": "raid1", 00:20:10.371 "superblock": false, 00:20:10.371 "num_base_bdevs": 3, 00:20:10.371 "num_base_bdevs_discovered": 1, 00:20:10.371 "num_base_bdevs_operational": 3, 00:20:10.371 "base_bdevs_list": [ 00:20:10.371 { 00:20:10.371 "name": "BaseBdev1", 00:20:10.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.371 "is_configured": false, 00:20:10.371 "data_offset": 0, 00:20:10.371 "data_size": 0 00:20:10.371 }, 00:20:10.371 { 00:20:10.371 "name": null, 00:20:10.371 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:10.371 "is_configured": false, 00:20:10.371 "data_offset": 0, 00:20:10.371 "data_size": 65536 00:20:10.371 }, 00:20:10.371 { 00:20:10.371 "name": "BaseBdev3", 00:20:10.371 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:10.371 "is_configured": true, 00:20:10.371 "data_offset": 0, 00:20:10.371 "data_size": 65536 00:20:10.371 } 00:20:10.371 ] 00:20:10.371 }' 00:20:10.371 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:10.371 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:10.938 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.938 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:11.196 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:11.196 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:11.454 [2024-07-15 14:12:57.306366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:11.454 BaseBdev1 00:20:11.454 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:11.454 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:11.454 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:11.454 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:11.454 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:11.454 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:11.454 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.712 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:11.971 [ 00:20:11.971 { 00:20:11.971 "name": "BaseBdev1", 00:20:11.971 "aliases": [ 00:20:11.971 "52f070de-1e2c-42be-a00c-eeb0b45b9067" 00:20:11.971 ], 00:20:11.971 "product_name": "Malloc disk", 00:20:11.971 "block_size": 512, 00:20:11.971 "num_blocks": 65536, 00:20:11.971 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:11.971 "assigned_rate_limits": { 00:20:11.971 
"rw_ios_per_sec": 0, 00:20:11.971 "rw_mbytes_per_sec": 0, 00:20:11.971 "r_mbytes_per_sec": 0, 00:20:11.971 "w_mbytes_per_sec": 0 00:20:11.971 }, 00:20:11.971 "claimed": true, 00:20:11.971 "claim_type": "exclusive_write", 00:20:11.971 "zoned": false, 00:20:11.971 "supported_io_types": { 00:20:11.971 "read": true, 00:20:11.971 "write": true, 00:20:11.971 "unmap": true, 00:20:11.971 "flush": true, 00:20:11.971 "reset": true, 00:20:11.971 "nvme_admin": false, 00:20:11.971 "nvme_io": false, 00:20:11.971 "nvme_io_md": false, 00:20:11.971 "write_zeroes": true, 00:20:11.971 "zcopy": true, 00:20:11.971 "get_zone_info": false, 00:20:11.971 "zone_management": false, 00:20:11.971 "zone_append": false, 00:20:11.971 "compare": false, 00:20:11.971 "compare_and_write": false, 00:20:11.971 "abort": true, 00:20:11.971 "seek_hole": false, 00:20:11.971 "seek_data": false, 00:20:11.971 "copy": true, 00:20:11.971 "nvme_iov_md": false 00:20:11.971 }, 00:20:11.971 "memory_domains": [ 00:20:11.971 { 00:20:11.971 "dma_device_id": "system", 00:20:11.971 "dma_device_type": 1 00:20:11.971 }, 00:20:11.971 { 00:20:11.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.971 "dma_device_type": 2 00:20:11.971 } 00:20:11.971 ], 00:20:11.971 "driver_specific": {} 00:20:11.971 } 00:20:11.971 ] 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.971 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:12.228 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:12.228 "name": "Existed_Raid", 00:20:12.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.228 "strip_size_kb": 0, 00:20:12.228 "state": "configuring", 00:20:12.228 "raid_level": "raid1", 00:20:12.228 "superblock": false, 00:20:12.228 "num_base_bdevs": 3, 00:20:12.228 "num_base_bdevs_discovered": 2, 00:20:12.228 "num_base_bdevs_operational": 3, 00:20:12.228 "base_bdevs_list": [ 00:20:12.228 { 00:20:12.228 "name": "BaseBdev1", 00:20:12.228 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:12.228 "is_configured": true, 00:20:12.228 "data_offset": 0, 00:20:12.228 
"data_size": 65536 00:20:12.228 }, 00:20:12.228 { 00:20:12.228 "name": null, 00:20:12.228 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:12.228 "is_configured": false, 00:20:12.228 "data_offset": 0, 00:20:12.228 "data_size": 65536 00:20:12.228 }, 00:20:12.228 { 00:20:12.228 "name": "BaseBdev3", 00:20:12.228 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:12.228 "is_configured": true, 00:20:12.228 "data_offset": 0, 00:20:12.228 "data_size": 65536 00:20:12.228 } 00:20:12.228 ] 00:20:12.229 }' 00:20:12.229 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:12.229 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:12.794 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.794 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:13.052 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:13.052 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:13.311 [2024-07-15 14:12:59.247626] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.311 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.879 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.879 "name": "Existed_Raid", 00:20:13.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:13.879 "strip_size_kb": 0, 00:20:13.879 "state": "configuring", 00:20:13.879 "raid_level": "raid1", 00:20:13.879 "superblock": false, 00:20:13.879 "num_base_bdevs": 3, 00:20:13.879 "num_base_bdevs_discovered": 1, 00:20:13.879 "num_base_bdevs_operational": 3, 00:20:13.879 "base_bdevs_list": [ 00:20:13.879 { 00:20:13.879 "name": "BaseBdev1", 00:20:13.879 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:13.879 "is_configured": true, 
00:20:13.879 "data_offset": 0, 00:20:13.879 "data_size": 65536 00:20:13.879 }, 00:20:13.879 { 00:20:13.879 "name": null, 00:20:13.879 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:13.879 "is_configured": false, 00:20:13.879 "data_offset": 0, 00:20:13.879 "data_size": 65536 00:20:13.879 }, 00:20:13.879 { 00:20:13.879 "name": null, 00:20:13.879 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:13.879 "is_configured": false, 00:20:13.879 "data_offset": 0, 00:20:13.879 "data_size": 65536 00:20:13.879 } 00:20:13.879 ] 00:20:13.879 }' 00:20:13.879 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.879 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:14.447 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:14.447 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.706 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:14.706 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:14.965 [2024-07-15 14:13:00.939950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.965 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.223 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:15.223 "name": "Existed_Raid", 00:20:15.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.223 "strip_size_kb": 0, 00:20:15.223 "state": "configuring", 00:20:15.223 "raid_level": "raid1", 00:20:15.223 "superblock": false, 00:20:15.223 "num_base_bdevs": 3, 00:20:15.223 "num_base_bdevs_discovered": 2, 00:20:15.223 "num_base_bdevs_operational": 3, 00:20:15.223 "base_bdevs_list": [ 00:20:15.223 { 00:20:15.223 "name": "BaseBdev1", 00:20:15.223 "uuid": 
"52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:15.223 "is_configured": true, 00:20:15.223 "data_offset": 0, 00:20:15.223 "data_size": 65536 00:20:15.223 }, 00:20:15.223 { 00:20:15.223 "name": null, 00:20:15.223 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:15.223 "is_configured": false, 00:20:15.223 "data_offset": 0, 00:20:15.223 "data_size": 65536 00:20:15.223 }, 00:20:15.223 { 00:20:15.223 "name": "BaseBdev3", 00:20:15.223 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:15.223 "is_configured": true, 00:20:15.223 "data_offset": 0, 00:20:15.223 "data_size": 65536 00:20:15.223 } 00:20:15.223 ] 00:20:15.223 }' 00:20:15.223 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:15.223 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:15.790 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:15.790 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.048 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:16.048 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:16.307 [2024-07-15 14:13:02.244084] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.565 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.824 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.824 "name": "Existed_Raid", 00:20:16.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.824 "strip_size_kb": 0, 00:20:16.824 "state": "configuring", 00:20:16.824 "raid_level": "raid1", 00:20:16.824 "superblock": false, 00:20:16.824 "num_base_bdevs": 3, 00:20:16.824 "num_base_bdevs_discovered": 1, 00:20:16.824 "num_base_bdevs_operational": 3, 00:20:16.824 "base_bdevs_list": [ 00:20:16.824 { 00:20:16.824 
"name": null, 00:20:16.824 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:16.824 "is_configured": false, 00:20:16.824 "data_offset": 0, 00:20:16.824 "data_size": 65536 00:20:16.824 }, 00:20:16.824 { 00:20:16.824 "name": null, 00:20:16.824 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:16.824 "is_configured": false, 00:20:16.824 "data_offset": 0, 00:20:16.824 "data_size": 65536 00:20:16.824 }, 00:20:16.824 { 00:20:16.824 "name": "BaseBdev3", 00:20:16.824 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:16.824 "is_configured": true, 00:20:16.824 "data_offset": 0, 00:20:16.824 "data_size": 65536 00:20:16.824 } 00:20:16.824 ] 00:20:16.824 }' 00:20:16.824 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.824 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:17.392 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:17.392 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.651 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:17.651 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:17.909 [2024-07-15 14:13:03.748353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:17.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:17.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:17.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:17.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:17.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:18.169 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:18.169 "name": "Existed_Raid", 00:20:18.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.169 "strip_size_kb": 0, 00:20:18.169 "state": "configuring", 00:20:18.169 "raid_level": "raid1", 00:20:18.169 "superblock": false, 00:20:18.169 "num_base_bdevs": 3, 00:20:18.169 "num_base_bdevs_discovered": 2, 00:20:18.169 
"num_base_bdevs_operational": 3, 00:20:18.169 "base_bdevs_list": [ 00:20:18.169 { 00:20:18.169 "name": null, 00:20:18.169 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:18.169 "is_configured": false, 00:20:18.169 "data_offset": 0, 00:20:18.169 "data_size": 65536 00:20:18.169 }, 00:20:18.169 { 00:20:18.169 "name": "BaseBdev2", 00:20:18.169 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:18.169 "is_configured": true, 00:20:18.169 "data_offset": 0, 00:20:18.169 "data_size": 65536 00:20:18.169 }, 00:20:18.169 { 00:20:18.169 "name": "BaseBdev3", 00:20:18.169 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:18.169 "is_configured": true, 00:20:18.169 "data_offset": 0, 00:20:18.169 "data_size": 65536 00:20:18.169 } 00:20:18.169 ] 00:20:18.169 }' 00:20:18.169 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:18.169 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:18.736 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.736 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:18.994 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:18.994 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.994 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:19.253 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 52f070de-1e2c-42be-a00c-eeb0b45b9067 00:20:19.512 [2024-07-15 14:13:05.511790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:19.512 [2024-07-15 14:13:05.511849] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:20:19.512 [2024-07-15 14:13:05.511860] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:19.512 [2024-07-15 14:13:05.511953] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:19.512 [2024-07-15 14:13:05.512174] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:20:19.512 [2024-07-15 14:13:05.512189] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:20:19.512 [2024-07-15 14:13:05.512383] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:19.512 NewBaseBdev 00:20:19.771 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:19.771 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:19.771 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:19.771 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:20:19.771 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:19.771 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:19.771 
14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:20.030 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:20.030 [ 00:20:20.030 { 00:20:20.030 "name": "NewBaseBdev", 00:20:20.030 "aliases": [ 00:20:20.030 "52f070de-1e2c-42be-a00c-eeb0b45b9067" 00:20:20.030 ], 00:20:20.030 "product_name": "Malloc disk", 00:20:20.030 "block_size": 512, 00:20:20.030 "num_blocks": 65536, 00:20:20.030 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:20.030 "assigned_rate_limits": { 00:20:20.030 "rw_ios_per_sec": 0, 00:20:20.030 "rw_mbytes_per_sec": 0, 00:20:20.030 "r_mbytes_per_sec": 0, 00:20:20.030 "w_mbytes_per_sec": 0 00:20:20.030 }, 00:20:20.030 "claimed": true, 00:20:20.030 "claim_type": "exclusive_write", 00:20:20.030 "zoned": false, 00:20:20.030 "supported_io_types": { 00:20:20.030 "read": true, 00:20:20.030 "write": true, 00:20:20.030 "unmap": true, 00:20:20.030 "flush": true, 00:20:20.030 "reset": true, 00:20:20.030 "nvme_admin": false, 00:20:20.030 "nvme_io": false, 00:20:20.030 "nvme_io_md": false, 00:20:20.030 "write_zeroes": true, 00:20:20.030 "zcopy": true, 00:20:20.030 "get_zone_info": false, 00:20:20.030 "zone_management": false, 00:20:20.030 "zone_append": false, 00:20:20.030 "compare": false, 00:20:20.030 "compare_and_write": false, 00:20:20.030 "abort": true, 00:20:20.030 "seek_hole": false, 00:20:20.030 "seek_data": false, 00:20:20.030 "copy": true, 00:20:20.030 "nvme_iov_md": false 00:20:20.030 }, 00:20:20.030 "memory_domains": [ 00:20:20.030 { 00:20:20.030 "dma_device_id": "system", 00:20:20.030 "dma_device_type": 1 00:20:20.030 }, 00:20:20.030 { 00:20:20.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.030 "dma_device_type": 2 00:20:20.030 } 00:20:20.030 ], 00:20:20.030 "driver_specific": {} 00:20:20.030 } 00:20:20.030 ] 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.030 14:13:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:20.596 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:20.596 "name": "Existed_Raid", 00:20:20.596 "uuid": "b0618e28-27b7-4538-a62f-755cc1563ee9", 00:20:20.596 "strip_size_kb": 0, 00:20:20.596 "state": "online", 00:20:20.596 "raid_level": "raid1", 00:20:20.596 "superblock": false, 00:20:20.596 "num_base_bdevs": 3, 00:20:20.596 "num_base_bdevs_discovered": 3, 00:20:20.596 "num_base_bdevs_operational": 3, 00:20:20.596 "base_bdevs_list": [ 00:20:20.596 { 00:20:20.596 "name": "NewBaseBdev", 00:20:20.596 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:20.596 "is_configured": true, 00:20:20.596 "data_offset": 0, 00:20:20.596 "data_size": 65536 00:20:20.596 }, 00:20:20.596 { 00:20:20.596 "name": "BaseBdev2", 00:20:20.596 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:20.596 "is_configured": true, 00:20:20.596 "data_offset": 0, 00:20:20.596 "data_size": 65536 00:20:20.596 }, 00:20:20.596 { 00:20:20.596 "name": "BaseBdev3", 00:20:20.596 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:20.596 "is_configured": true, 00:20:20.596 "data_offset": 0, 00:20:20.596 "data_size": 65536 00:20:20.596 } 00:20:20.596 ] 00:20:20.596 }' 00:20:20.597 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:20.597 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:21.163 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:21.420 [2024-07-15 14:13:07.280303] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:21.420 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:21.420 "name": "Existed_Raid", 00:20:21.420 "aliases": [ 00:20:21.420 "b0618e28-27b7-4538-a62f-755cc1563ee9" 00:20:21.420 ], 00:20:21.420 "product_name": "Raid Volume", 00:20:21.420 "block_size": 512, 00:20:21.420 "num_blocks": 65536, 00:20:21.420 "uuid": "b0618e28-27b7-4538-a62f-755cc1563ee9", 00:20:21.420 "assigned_rate_limits": { 00:20:21.420 "rw_ios_per_sec": 0, 00:20:21.420 "rw_mbytes_per_sec": 0, 00:20:21.420 "r_mbytes_per_sec": 0, 00:20:21.420 "w_mbytes_per_sec": 0 00:20:21.420 }, 00:20:21.420 "claimed": false, 00:20:21.420 "zoned": false, 00:20:21.421 "supported_io_types": { 00:20:21.421 "read": true, 00:20:21.421 "write": true, 00:20:21.421 "unmap": false, 00:20:21.421 "flush": false, 00:20:21.421 "reset": true, 00:20:21.421 "nvme_admin": false, 00:20:21.421 "nvme_io": false, 00:20:21.421 "nvme_io_md": false, 00:20:21.421 "write_zeroes": true, 00:20:21.421 
"zcopy": false, 00:20:21.421 "get_zone_info": false, 00:20:21.421 "zone_management": false, 00:20:21.421 "zone_append": false, 00:20:21.421 "compare": false, 00:20:21.421 "compare_and_write": false, 00:20:21.421 "abort": false, 00:20:21.421 "seek_hole": false, 00:20:21.421 "seek_data": false, 00:20:21.421 "copy": false, 00:20:21.421 "nvme_iov_md": false 00:20:21.421 }, 00:20:21.421 "memory_domains": [ 00:20:21.421 { 00:20:21.421 "dma_device_id": "system", 00:20:21.421 "dma_device_type": 1 00:20:21.421 }, 00:20:21.421 { 00:20:21.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.421 "dma_device_type": 2 00:20:21.421 }, 00:20:21.421 { 00:20:21.421 "dma_device_id": "system", 00:20:21.421 "dma_device_type": 1 00:20:21.421 }, 00:20:21.421 { 00:20:21.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.421 "dma_device_type": 2 00:20:21.421 }, 00:20:21.421 { 00:20:21.421 "dma_device_id": "system", 00:20:21.421 "dma_device_type": 1 00:20:21.421 }, 00:20:21.421 { 00:20:21.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.421 "dma_device_type": 2 00:20:21.421 } 00:20:21.421 ], 00:20:21.421 "driver_specific": { 00:20:21.421 "raid": { 00:20:21.421 "uuid": "b0618e28-27b7-4538-a62f-755cc1563ee9", 00:20:21.421 "strip_size_kb": 0, 00:20:21.421 "state": "online", 00:20:21.421 "raid_level": "raid1", 00:20:21.421 "superblock": false, 00:20:21.421 "num_base_bdevs": 3, 00:20:21.421 "num_base_bdevs_discovered": 3, 00:20:21.421 "num_base_bdevs_operational": 3, 00:20:21.421 "base_bdevs_list": [ 00:20:21.421 { 00:20:21.421 "name": "NewBaseBdev", 00:20:21.421 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:21.421 "is_configured": true, 00:20:21.421 "data_offset": 0, 00:20:21.421 "data_size": 65536 00:20:21.421 }, 00:20:21.421 { 00:20:21.421 "name": "BaseBdev2", 00:20:21.421 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:21.421 "is_configured": true, 00:20:21.421 "data_offset": 0, 00:20:21.421 "data_size": 65536 00:20:21.421 }, 00:20:21.421 { 00:20:21.421 "name": "BaseBdev3", 00:20:21.421 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:21.421 "is_configured": true, 00:20:21.421 "data_offset": 0, 00:20:21.421 "data_size": 65536 00:20:21.421 } 00:20:21.421 ] 00:20:21.421 } 00:20:21.421 } 00:20:21.421 }' 00:20:21.421 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:21.421 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:21.421 BaseBdev2 00:20:21.421 BaseBdev3' 00:20:21.421 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.421 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.421 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:21.679 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.679 "name": "NewBaseBdev", 00:20:21.679 "aliases": [ 00:20:21.679 "52f070de-1e2c-42be-a00c-eeb0b45b9067" 00:20:21.679 ], 00:20:21.679 "product_name": "Malloc disk", 00:20:21.679 "block_size": 512, 00:20:21.679 "num_blocks": 65536, 00:20:21.679 "uuid": "52f070de-1e2c-42be-a00c-eeb0b45b9067", 00:20:21.679 "assigned_rate_limits": { 00:20:21.679 "rw_ios_per_sec": 0, 00:20:21.679 "rw_mbytes_per_sec": 0, 00:20:21.679 "r_mbytes_per_sec": 0, 00:20:21.679 
"w_mbytes_per_sec": 0 00:20:21.679 }, 00:20:21.679 "claimed": true, 00:20:21.679 "claim_type": "exclusive_write", 00:20:21.679 "zoned": false, 00:20:21.679 "supported_io_types": { 00:20:21.679 "read": true, 00:20:21.679 "write": true, 00:20:21.679 "unmap": true, 00:20:21.679 "flush": true, 00:20:21.679 "reset": true, 00:20:21.679 "nvme_admin": false, 00:20:21.679 "nvme_io": false, 00:20:21.679 "nvme_io_md": false, 00:20:21.679 "write_zeroes": true, 00:20:21.679 "zcopy": true, 00:20:21.679 "get_zone_info": false, 00:20:21.679 "zone_management": false, 00:20:21.679 "zone_append": false, 00:20:21.679 "compare": false, 00:20:21.679 "compare_and_write": false, 00:20:21.679 "abort": true, 00:20:21.679 "seek_hole": false, 00:20:21.679 "seek_data": false, 00:20:21.679 "copy": true, 00:20:21.679 "nvme_iov_md": false 00:20:21.679 }, 00:20:21.679 "memory_domains": [ 00:20:21.679 { 00:20:21.679 "dma_device_id": "system", 00:20:21.679 "dma_device_type": 1 00:20:21.679 }, 00:20:21.679 { 00:20:21.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.679 "dma_device_type": 2 00:20:21.679 } 00:20:21.679 ], 00:20:21.679 "driver_specific": {} 00:20:21.679 }' 00:20:21.679 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.679 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.937 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.195 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.195 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:22.195 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:22.195 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:22.195 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:22.453 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:22.453 "name": "BaseBdev2", 00:20:22.453 "aliases": [ 00:20:22.453 "6426658b-ed94-487a-90a3-adb8df466b42" 00:20:22.453 ], 00:20:22.453 "product_name": "Malloc disk", 00:20:22.453 "block_size": 512, 00:20:22.453 "num_blocks": 65536, 00:20:22.453 "uuid": "6426658b-ed94-487a-90a3-adb8df466b42", 00:20:22.453 "assigned_rate_limits": { 00:20:22.453 "rw_ios_per_sec": 0, 00:20:22.453 "rw_mbytes_per_sec": 0, 00:20:22.453 "r_mbytes_per_sec": 0, 00:20:22.453 "w_mbytes_per_sec": 0 00:20:22.453 }, 00:20:22.453 "claimed": true, 00:20:22.453 "claim_type": "exclusive_write", 00:20:22.453 "zoned": false, 00:20:22.453 "supported_io_types": { 00:20:22.453 "read": 
true, 00:20:22.453 "write": true, 00:20:22.453 "unmap": true, 00:20:22.453 "flush": true, 00:20:22.453 "reset": true, 00:20:22.453 "nvme_admin": false, 00:20:22.453 "nvme_io": false, 00:20:22.453 "nvme_io_md": false, 00:20:22.453 "write_zeroes": true, 00:20:22.453 "zcopy": true, 00:20:22.453 "get_zone_info": false, 00:20:22.453 "zone_management": false, 00:20:22.453 "zone_append": false, 00:20:22.453 "compare": false, 00:20:22.453 "compare_and_write": false, 00:20:22.453 "abort": true, 00:20:22.453 "seek_hole": false, 00:20:22.453 "seek_data": false, 00:20:22.453 "copy": true, 00:20:22.453 "nvme_iov_md": false 00:20:22.453 }, 00:20:22.453 "memory_domains": [ 00:20:22.453 { 00:20:22.453 "dma_device_id": "system", 00:20:22.453 "dma_device_type": 1 00:20:22.453 }, 00:20:22.453 { 00:20:22.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.453 "dma_device_type": 2 00:20:22.453 } 00:20:22.453 ], 00:20:22.453 "driver_specific": {} 00:20:22.453 }' 00:20:22.453 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.453 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.453 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:22.453 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.453 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:22.711 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:23.278 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:23.278 "name": "BaseBdev3", 00:20:23.278 "aliases": [ 00:20:23.278 "2600c58c-54ca-484f-8c79-8405a16b5b37" 00:20:23.278 ], 00:20:23.278 "product_name": "Malloc disk", 00:20:23.278 "block_size": 512, 00:20:23.278 "num_blocks": 65536, 00:20:23.278 "uuid": "2600c58c-54ca-484f-8c79-8405a16b5b37", 00:20:23.278 "assigned_rate_limits": { 00:20:23.278 "rw_ios_per_sec": 0, 00:20:23.278 "rw_mbytes_per_sec": 0, 00:20:23.278 "r_mbytes_per_sec": 0, 00:20:23.278 "w_mbytes_per_sec": 0 00:20:23.278 }, 00:20:23.278 "claimed": true, 00:20:23.278 "claim_type": "exclusive_write", 00:20:23.278 "zoned": false, 00:20:23.278 "supported_io_types": { 00:20:23.278 "read": true, 00:20:23.278 "write": true, 00:20:23.278 "unmap": true, 00:20:23.278 "flush": true, 00:20:23.278 "reset": true, 00:20:23.278 "nvme_admin": false, 00:20:23.278 "nvme_io": false, 00:20:23.278 
"nvme_io_md": false, 00:20:23.278 "write_zeroes": true, 00:20:23.278 "zcopy": true, 00:20:23.278 "get_zone_info": false, 00:20:23.278 "zone_management": false, 00:20:23.278 "zone_append": false, 00:20:23.278 "compare": false, 00:20:23.278 "compare_and_write": false, 00:20:23.278 "abort": true, 00:20:23.278 "seek_hole": false, 00:20:23.278 "seek_data": false, 00:20:23.278 "copy": true, 00:20:23.278 "nvme_iov_md": false 00:20:23.278 }, 00:20:23.278 "memory_domains": [ 00:20:23.278 { 00:20:23.278 "dma_device_id": "system", 00:20:23.278 "dma_device_type": 1 00:20:23.278 }, 00:20:23.278 { 00:20:23.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:23.278 "dma_device_type": 2 00:20:23.278 } 00:20:23.278 ], 00:20:23.278 "driver_specific": {} 00:20:23.278 }' 00:20:23.278 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.278 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:23.278 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:23.278 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.278 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:23.278 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:23.278 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.278 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:23.536 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:23.536 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.536 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:23.536 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:23.536 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:23.795 [2024-07-15 14:13:09.672448] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:23.795 [2024-07-15 14:13:09.672513] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:23.795 [2024-07-15 14:13:09.672601] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:23.795 [2024-07-15 14:13:09.672805] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:23.795 [2024-07-15 14:13:09.672819] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 197156 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 197156 ']' 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 197156 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 197156 00:20:23.795 
14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 197156' 00:20:23.795 killing process with pid 197156 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 197156 00:20:23.795 [2024-07-15 14:13:09.715871] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:23.795 14:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 197156 00:20:24.057 [2024-07-15 14:13:09.972862] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:20:25.435 00:20:25.435 real 0m33.328s 00:20:25.435 user 1m1.513s 00:20:25.435 sys 0m3.824s 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.435 ************************************ 00:20:25.435 END TEST raid_state_function_test 00:20:25.435 ************************************ 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:20:25.435 14:13:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:25.435 14:13:11 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:20:25.435 14:13:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:20:25.435 14:13:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.435 14:13:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:25.435 ************************************ 00:20:25.435 START TEST raid_state_function_test_sb 00:20:25.435 ************************************ 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:20:25.435 
14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=198159 00:20:25.435 Process raid pid: 198159 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 198159' 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 198159 /var/tmp/spdk-raid.sock 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 198159 ']' 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:25.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:25.435 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:25.435 [2024-07-15 14:13:11.203331] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:20:25.435 [2024-07-15 14:13:11.203569] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.435 [2024-07-15 14:13:11.365141] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.696 [2024-07-15 14:13:11.624915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.976 [2024-07-15 14:13:11.824383] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:26.234 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:26.234 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:20:26.234 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:26.492 [2024-07-15 14:13:12.355467] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:26.492 [2024-07-15 14:13:12.355753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:26.492 [2024-07-15 14:13:12.355908] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:26.492 [2024-07-15 14:13:12.355993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:26.492 [2024-07-15 14:13:12.356088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:26.492 [2024-07-15 14:13:12.356152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.492 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:26.749 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.749 "name": "Existed_Raid", 00:20:26.749 "uuid": 
"fe2e1bfb-e678-4237-95db-5a77cca8c375", 00:20:26.749 "strip_size_kb": 0, 00:20:26.749 "state": "configuring", 00:20:26.749 "raid_level": "raid1", 00:20:26.749 "superblock": true, 00:20:26.749 "num_base_bdevs": 3, 00:20:26.749 "num_base_bdevs_discovered": 0, 00:20:26.749 "num_base_bdevs_operational": 3, 00:20:26.749 "base_bdevs_list": [ 00:20:26.749 { 00:20:26.749 "name": "BaseBdev1", 00:20:26.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.749 "is_configured": false, 00:20:26.749 "data_offset": 0, 00:20:26.749 "data_size": 0 00:20:26.749 }, 00:20:26.749 { 00:20:26.749 "name": "BaseBdev2", 00:20:26.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.749 "is_configured": false, 00:20:26.750 "data_offset": 0, 00:20:26.750 "data_size": 0 00:20:26.750 }, 00:20:26.750 { 00:20:26.750 "name": "BaseBdev3", 00:20:26.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.750 "is_configured": false, 00:20:26.750 "data_offset": 0, 00:20:26.750 "data_size": 0 00:20:26.750 } 00:20:26.750 ] 00:20:26.750 }' 00:20:26.750 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.750 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:27.682 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:27.682 [2024-07-15 14:13:13.555568] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:27.682 [2024-07-15 14:13:13.555830] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:20:27.682 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:27.939 [2024-07-15 14:13:13.847640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:27.940 [2024-07-15 14:13:13.847929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:27.940 [2024-07-15 14:13:13.848052] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:27.940 [2024-07-15 14:13:13.848121] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:27.940 [2024-07-15 14:13:13.848274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:27.940 [2024-07-15 14:13:13.848365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:27.940 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:28.197 [2024-07-15 14:13:14.118502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:28.197 BaseBdev1 00:20:28.197 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:20:28.198 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:28.198 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:28.198 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:20:28.198 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:28.198 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:28.198 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:28.455 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:28.712 [ 00:20:28.712 { 00:20:28.712 "name": "BaseBdev1", 00:20:28.712 "aliases": [ 00:20:28.712 "e8a191b7-2b48-4dff-af52-85a297265698" 00:20:28.712 ], 00:20:28.712 "product_name": "Malloc disk", 00:20:28.712 "block_size": 512, 00:20:28.712 "num_blocks": 65536, 00:20:28.712 "uuid": "e8a191b7-2b48-4dff-af52-85a297265698", 00:20:28.712 "assigned_rate_limits": { 00:20:28.712 "rw_ios_per_sec": 0, 00:20:28.712 "rw_mbytes_per_sec": 0, 00:20:28.712 "r_mbytes_per_sec": 0, 00:20:28.712 "w_mbytes_per_sec": 0 00:20:28.712 }, 00:20:28.712 "claimed": true, 00:20:28.712 "claim_type": "exclusive_write", 00:20:28.712 "zoned": false, 00:20:28.712 "supported_io_types": { 00:20:28.712 "read": true, 00:20:28.712 "write": true, 00:20:28.712 "unmap": true, 00:20:28.712 "flush": true, 00:20:28.712 "reset": true, 00:20:28.712 "nvme_admin": false, 00:20:28.712 "nvme_io": false, 00:20:28.712 "nvme_io_md": false, 00:20:28.712 "write_zeroes": true, 00:20:28.712 "zcopy": true, 00:20:28.712 "get_zone_info": false, 00:20:28.712 "zone_management": false, 00:20:28.712 "zone_append": false, 00:20:28.712 "compare": false, 00:20:28.712 "compare_and_write": false, 00:20:28.712 "abort": true, 00:20:28.712 "seek_hole": false, 00:20:28.712 "seek_data": false, 00:20:28.712 "copy": true, 00:20:28.712 "nvme_iov_md": false 00:20:28.712 }, 00:20:28.712 "memory_domains": [ 00:20:28.712 { 00:20:28.712 "dma_device_id": "system", 00:20:28.712 "dma_device_type": 1 00:20:28.712 }, 00:20:28.712 { 00:20:28.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.712 "dma_device_type": 2 00:20:28.712 } 00:20:28.712 ], 00:20:28.712 "driver_specific": {} 00:20:28.712 } 00:20:28.712 ] 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:28.712 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:28.970 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:28.970 "name": "Existed_Raid", 00:20:28.970 "uuid": "aa9d78ed-b4b0-453f-947b-bd96e0e4eedc", 00:20:28.970 "strip_size_kb": 0, 00:20:28.970 "state": "configuring", 00:20:28.970 "raid_level": "raid1", 00:20:28.970 "superblock": true, 00:20:28.970 "num_base_bdevs": 3, 00:20:28.970 "num_base_bdevs_discovered": 1, 00:20:28.970 "num_base_bdevs_operational": 3, 00:20:28.970 "base_bdevs_list": [ 00:20:28.970 { 00:20:28.970 "name": "BaseBdev1", 00:20:28.970 "uuid": "e8a191b7-2b48-4dff-af52-85a297265698", 00:20:28.970 "is_configured": true, 00:20:28.970 "data_offset": 2048, 00:20:28.970 "data_size": 63488 00:20:28.970 }, 00:20:28.970 { 00:20:28.970 "name": "BaseBdev2", 00:20:28.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.970 "is_configured": false, 00:20:28.970 "data_offset": 0, 00:20:28.970 "data_size": 0 00:20:28.970 }, 00:20:28.970 { 00:20:28.970 "name": "BaseBdev3", 00:20:28.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:28.970 "is_configured": false, 00:20:28.970 "data_offset": 0, 00:20:28.970 "data_size": 0 00:20:28.970 } 00:20:28.970 ] 00:20:28.970 }' 00:20:28.970 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:28.970 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:29.595 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:29.853 [2024-07-15 14:13:15.750968] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:29.853 [2024-07-15 14:13:15.751213] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:20:29.853 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:30.111 [2024-07-15 14:13:15.983051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.111 [2024-07-15 14:13:15.984825] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:30.111 [2024-07-15 14:13:15.985057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:30.111 [2024-07-15 14:13:15.985172] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:20:30.111 [2024-07-15 14:13:15.985244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:30.111 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.391 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:30.391 "name": "Existed_Raid", 00:20:30.391 "uuid": "3142752c-7aab-48b2-8590-c7d7176d3c6a", 00:20:30.391 "strip_size_kb": 0, 00:20:30.391 "state": "configuring", 00:20:30.391 "raid_level": "raid1", 00:20:30.391 "superblock": true, 00:20:30.391 "num_base_bdevs": 3, 00:20:30.391 "num_base_bdevs_discovered": 1, 00:20:30.391 "num_base_bdevs_operational": 3, 00:20:30.391 "base_bdevs_list": [ 00:20:30.391 { 00:20:30.391 "name": "BaseBdev1", 00:20:30.391 "uuid": "e8a191b7-2b48-4dff-af52-85a297265698", 00:20:30.391 "is_configured": true, 00:20:30.391 "data_offset": 2048, 00:20:30.391 "data_size": 63488 00:20:30.391 }, 00:20:30.391 { 00:20:30.391 "name": "BaseBdev2", 00:20:30.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.391 "is_configured": false, 00:20:30.391 "data_offset": 0, 00:20:30.391 "data_size": 0 00:20:30.391 }, 00:20:30.391 { 00:20:30.391 "name": "BaseBdev3", 00:20:30.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:30.391 "is_configured": false, 00:20:30.391 "data_offset": 0, 00:20:30.391 "data_size": 0 00:20:30.391 } 00:20:30.391 ] 00:20:30.391 }' 00:20:30.391 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:30.391 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:31.324 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:31.324 [2024-07-15 14:13:17.308844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:31.324 BaseBdev2 00:20:31.581 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:20:31.581 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:31.581 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:31.581 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:31.581 14:13:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:31.581 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:31.581 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:31.581 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:31.838 [ 00:20:31.838 { 00:20:31.838 "name": "BaseBdev2", 00:20:31.838 "aliases": [ 00:20:31.838 "7f1ca74d-a68f-44e7-86a5-9727cec4d008" 00:20:31.838 ], 00:20:31.838 "product_name": "Malloc disk", 00:20:31.838 "block_size": 512, 00:20:31.838 "num_blocks": 65536, 00:20:31.838 "uuid": "7f1ca74d-a68f-44e7-86a5-9727cec4d008", 00:20:31.838 "assigned_rate_limits": { 00:20:31.838 "rw_ios_per_sec": 0, 00:20:31.838 "rw_mbytes_per_sec": 0, 00:20:31.838 "r_mbytes_per_sec": 0, 00:20:31.838 "w_mbytes_per_sec": 0 00:20:31.838 }, 00:20:31.838 "claimed": true, 00:20:31.838 "claim_type": "exclusive_write", 00:20:31.838 "zoned": false, 00:20:31.838 "supported_io_types": { 00:20:31.838 "read": true, 00:20:31.838 "write": true, 00:20:31.838 "unmap": true, 00:20:31.838 "flush": true, 00:20:31.838 "reset": true, 00:20:31.838 "nvme_admin": false, 00:20:31.838 "nvme_io": false, 00:20:31.838 "nvme_io_md": false, 00:20:31.838 "write_zeroes": true, 00:20:31.838 "zcopy": true, 00:20:31.838 "get_zone_info": false, 00:20:31.838 "zone_management": false, 00:20:31.838 "zone_append": false, 00:20:31.838 "compare": false, 00:20:31.838 "compare_and_write": false, 00:20:31.838 "abort": true, 00:20:31.838 "seek_hole": false, 00:20:31.838 "seek_data": false, 00:20:31.838 "copy": true, 00:20:31.838 "nvme_iov_md": false 00:20:31.838 }, 00:20:31.838 "memory_domains": [ 00:20:31.838 { 00:20:31.838 "dma_device_id": "system", 00:20:31.838 "dma_device_type": 1 00:20:31.838 }, 00:20:31.838 { 00:20:31.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:31.838 "dma_device_type": 2 00:20:31.838 } 00:20:31.838 ], 00:20:31.838 "driver_specific": {} 00:20:31.838 } 00:20:31.838 ] 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.838 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:32.095 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:32.095 "name": "Existed_Raid", 00:20:32.095 "uuid": "3142752c-7aab-48b2-8590-c7d7176d3c6a", 00:20:32.095 "strip_size_kb": 0, 00:20:32.095 "state": "configuring", 00:20:32.095 "raid_level": "raid1", 00:20:32.095 "superblock": true, 00:20:32.095 "num_base_bdevs": 3, 00:20:32.095 "num_base_bdevs_discovered": 2, 00:20:32.095 "num_base_bdevs_operational": 3, 00:20:32.095 "base_bdevs_list": [ 00:20:32.095 { 00:20:32.095 "name": "BaseBdev1", 00:20:32.095 "uuid": "e8a191b7-2b48-4dff-af52-85a297265698", 00:20:32.095 "is_configured": true, 00:20:32.095 "data_offset": 2048, 00:20:32.095 "data_size": 63488 00:20:32.095 }, 00:20:32.095 { 00:20:32.096 "name": "BaseBdev2", 00:20:32.096 "uuid": "7f1ca74d-a68f-44e7-86a5-9727cec4d008", 00:20:32.096 "is_configured": true, 00:20:32.096 "data_offset": 2048, 00:20:32.096 "data_size": 63488 00:20:32.096 }, 00:20:32.096 { 00:20:32.096 "name": "BaseBdev3", 00:20:32.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.096 "is_configured": false, 00:20:32.096 "data_offset": 0, 00:20:32.096 "data_size": 0 00:20:32.096 } 00:20:32.096 ] 00:20:32.096 }' 00:20:32.096 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.096 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:33.028 [2024-07-15 14:13:18.958664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:33.028 [2024-07-15 14:13:18.959106] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:20:33.028 [2024-07-15 14:13:18.959249] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:33.028 [2024-07-15 14:13:18.959404] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:33.028 [2024-07-15 14:13:18.959800] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:20:33.028 [2024-07-15 14:13:18.959928] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:20:33.028 BaseBdev3 00:20:33.028 [2024-07-15 14:13:18.960160] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 
00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:33.028 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:33.286 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:33.546 [ 00:20:33.546 { 00:20:33.546 "name": "BaseBdev3", 00:20:33.546 "aliases": [ 00:20:33.546 "53a40e10-ca51-4ee9-aaaa-3a455ece8e10" 00:20:33.546 ], 00:20:33.546 "product_name": "Malloc disk", 00:20:33.546 "block_size": 512, 00:20:33.546 "num_blocks": 65536, 00:20:33.546 "uuid": "53a40e10-ca51-4ee9-aaaa-3a455ece8e10", 00:20:33.546 "assigned_rate_limits": { 00:20:33.546 "rw_ios_per_sec": 0, 00:20:33.546 "rw_mbytes_per_sec": 0, 00:20:33.546 "r_mbytes_per_sec": 0, 00:20:33.546 "w_mbytes_per_sec": 0 00:20:33.546 }, 00:20:33.546 "claimed": true, 00:20:33.546 "claim_type": "exclusive_write", 00:20:33.546 "zoned": false, 00:20:33.546 "supported_io_types": { 00:20:33.546 "read": true, 00:20:33.546 "write": true, 00:20:33.546 "unmap": true, 00:20:33.546 "flush": true, 00:20:33.546 "reset": true, 00:20:33.546 "nvme_admin": false, 00:20:33.546 "nvme_io": false, 00:20:33.546 "nvme_io_md": false, 00:20:33.546 "write_zeroes": true, 00:20:33.546 "zcopy": true, 00:20:33.546 "get_zone_info": false, 00:20:33.546 "zone_management": false, 00:20:33.546 "zone_append": false, 00:20:33.546 "compare": false, 00:20:33.546 "compare_and_write": false, 00:20:33.546 "abort": true, 00:20:33.546 "seek_hole": false, 00:20:33.546 "seek_data": false, 00:20:33.546 "copy": true, 00:20:33.546 "nvme_iov_md": false 00:20:33.546 }, 00:20:33.546 "memory_domains": [ 00:20:33.546 { 00:20:33.546 "dma_device_id": "system", 00:20:33.546 "dma_device_type": 1 00:20:33.546 }, 00:20:33.546 { 00:20:33.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:33.546 "dma_device_type": 2 00:20:33.546 } 00:20:33.546 ], 00:20:33.546 "driver_specific": {} 00:20:33.546 } 00:20:33.546 ] 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:33.805 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.064 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.064 "name": "Existed_Raid", 00:20:34.064 "uuid": "3142752c-7aab-48b2-8590-c7d7176d3c6a", 00:20:34.064 "strip_size_kb": 0, 00:20:34.064 "state": "online", 00:20:34.064 "raid_level": "raid1", 00:20:34.064 "superblock": true, 00:20:34.064 "num_base_bdevs": 3, 00:20:34.064 "num_base_bdevs_discovered": 3, 00:20:34.064 "num_base_bdevs_operational": 3, 00:20:34.064 "base_bdevs_list": [ 00:20:34.064 { 00:20:34.064 "name": "BaseBdev1", 00:20:34.064 "uuid": "e8a191b7-2b48-4dff-af52-85a297265698", 00:20:34.064 "is_configured": true, 00:20:34.064 "data_offset": 2048, 00:20:34.064 "data_size": 63488 00:20:34.064 }, 00:20:34.064 { 00:20:34.064 "name": "BaseBdev2", 00:20:34.064 "uuid": "7f1ca74d-a68f-44e7-86a5-9727cec4d008", 00:20:34.064 "is_configured": true, 00:20:34.064 "data_offset": 2048, 00:20:34.064 "data_size": 63488 00:20:34.064 }, 00:20:34.064 { 00:20:34.064 "name": "BaseBdev3", 00:20:34.064 "uuid": "53a40e10-ca51-4ee9-aaaa-3a455ece8e10", 00:20:34.064 "is_configured": true, 00:20:34.064 "data_offset": 2048, 00:20:34.064 "data_size": 63488 00:20:34.064 } 00:20:34.064 ] 00:20:34.064 }' 00:20:34.064 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.064 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:34.631 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:34.890 [2024-07-15 14:13:20.683188] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:34.890 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:34.890 "name": "Existed_Raid", 00:20:34.890 "aliases": [ 00:20:34.890 "3142752c-7aab-48b2-8590-c7d7176d3c6a" 00:20:34.890 ], 00:20:34.890 "product_name": "Raid Volume", 00:20:34.890 "block_size": 512, 00:20:34.890 "num_blocks": 63488, 00:20:34.890 "uuid": "3142752c-7aab-48b2-8590-c7d7176d3c6a", 00:20:34.890 "assigned_rate_limits": { 00:20:34.890 
"rw_ios_per_sec": 0, 00:20:34.890 "rw_mbytes_per_sec": 0, 00:20:34.890 "r_mbytes_per_sec": 0, 00:20:34.890 "w_mbytes_per_sec": 0 00:20:34.890 }, 00:20:34.890 "claimed": false, 00:20:34.890 "zoned": false, 00:20:34.890 "supported_io_types": { 00:20:34.890 "read": true, 00:20:34.890 "write": true, 00:20:34.890 "unmap": false, 00:20:34.890 "flush": false, 00:20:34.890 "reset": true, 00:20:34.890 "nvme_admin": false, 00:20:34.890 "nvme_io": false, 00:20:34.890 "nvme_io_md": false, 00:20:34.890 "write_zeroes": true, 00:20:34.890 "zcopy": false, 00:20:34.890 "get_zone_info": false, 00:20:34.890 "zone_management": false, 00:20:34.890 "zone_append": false, 00:20:34.890 "compare": false, 00:20:34.890 "compare_and_write": false, 00:20:34.890 "abort": false, 00:20:34.890 "seek_hole": false, 00:20:34.890 "seek_data": false, 00:20:34.890 "copy": false, 00:20:34.890 "nvme_iov_md": false 00:20:34.890 }, 00:20:34.890 "memory_domains": [ 00:20:34.890 { 00:20:34.890 "dma_device_id": "system", 00:20:34.890 "dma_device_type": 1 00:20:34.890 }, 00:20:34.890 { 00:20:34.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.890 "dma_device_type": 2 00:20:34.890 }, 00:20:34.890 { 00:20:34.890 "dma_device_id": "system", 00:20:34.890 "dma_device_type": 1 00:20:34.890 }, 00:20:34.890 { 00:20:34.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.890 "dma_device_type": 2 00:20:34.890 }, 00:20:34.890 { 00:20:34.890 "dma_device_id": "system", 00:20:34.890 "dma_device_type": 1 00:20:34.890 }, 00:20:34.890 { 00:20:34.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:34.890 "dma_device_type": 2 00:20:34.890 } 00:20:34.890 ], 00:20:34.890 "driver_specific": { 00:20:34.890 "raid": { 00:20:34.890 "uuid": "3142752c-7aab-48b2-8590-c7d7176d3c6a", 00:20:34.890 "strip_size_kb": 0, 00:20:34.890 "state": "online", 00:20:34.890 "raid_level": "raid1", 00:20:34.890 "superblock": true, 00:20:34.890 "num_base_bdevs": 3, 00:20:34.890 "num_base_bdevs_discovered": 3, 00:20:34.890 "num_base_bdevs_operational": 3, 00:20:34.890 "base_bdevs_list": [ 00:20:34.890 { 00:20:34.890 "name": "BaseBdev1", 00:20:34.890 "uuid": "e8a191b7-2b48-4dff-af52-85a297265698", 00:20:34.890 "is_configured": true, 00:20:34.890 "data_offset": 2048, 00:20:34.890 "data_size": 63488 00:20:34.890 }, 00:20:34.890 { 00:20:34.890 "name": "BaseBdev2", 00:20:34.890 "uuid": "7f1ca74d-a68f-44e7-86a5-9727cec4d008", 00:20:34.890 "is_configured": true, 00:20:34.890 "data_offset": 2048, 00:20:34.890 "data_size": 63488 00:20:34.890 }, 00:20:34.890 { 00:20:34.890 "name": "BaseBdev3", 00:20:34.890 "uuid": "53a40e10-ca51-4ee9-aaaa-3a455ece8e10", 00:20:34.890 "is_configured": true, 00:20:34.890 "data_offset": 2048, 00:20:34.890 "data_size": 63488 00:20:34.890 } 00:20:34.890 ] 00:20:34.890 } 00:20:34.890 } 00:20:34.890 }' 00:20:34.890 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:34.890 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:34.890 BaseBdev2 00:20:34.890 BaseBdev3' 00:20:34.890 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:34.890 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:34.890 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.149 14:13:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.149 "name": "BaseBdev1", 00:20:35.149 "aliases": [ 00:20:35.149 "e8a191b7-2b48-4dff-af52-85a297265698" 00:20:35.149 ], 00:20:35.149 "product_name": "Malloc disk", 00:20:35.149 "block_size": 512, 00:20:35.149 "num_blocks": 65536, 00:20:35.149 "uuid": "e8a191b7-2b48-4dff-af52-85a297265698", 00:20:35.149 "assigned_rate_limits": { 00:20:35.149 "rw_ios_per_sec": 0, 00:20:35.149 "rw_mbytes_per_sec": 0, 00:20:35.149 "r_mbytes_per_sec": 0, 00:20:35.149 "w_mbytes_per_sec": 0 00:20:35.149 }, 00:20:35.149 "claimed": true, 00:20:35.149 "claim_type": "exclusive_write", 00:20:35.149 "zoned": false, 00:20:35.149 "supported_io_types": { 00:20:35.149 "read": true, 00:20:35.149 "write": true, 00:20:35.149 "unmap": true, 00:20:35.149 "flush": true, 00:20:35.149 "reset": true, 00:20:35.149 "nvme_admin": false, 00:20:35.149 "nvme_io": false, 00:20:35.149 "nvme_io_md": false, 00:20:35.149 "write_zeroes": true, 00:20:35.149 "zcopy": true, 00:20:35.149 "get_zone_info": false, 00:20:35.149 "zone_management": false, 00:20:35.149 "zone_append": false, 00:20:35.149 "compare": false, 00:20:35.149 "compare_and_write": false, 00:20:35.149 "abort": true, 00:20:35.149 "seek_hole": false, 00:20:35.149 "seek_data": false, 00:20:35.149 "copy": true, 00:20:35.149 "nvme_iov_md": false 00:20:35.149 }, 00:20:35.149 "memory_domains": [ 00:20:35.149 { 00:20:35.149 "dma_device_id": "system", 00:20:35.149 "dma_device_type": 1 00:20:35.149 }, 00:20:35.149 { 00:20:35.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.149 "dma_device_type": 2 00:20:35.149 } 00:20:35.149 ], 00:20:35.149 "driver_specific": {} 00:20:35.149 }' 00:20:35.149 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.149 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.149 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.149 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.149 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:35.407 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:35.667 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:35.667 "name": "BaseBdev2", 00:20:35.667 "aliases": [ 
00:20:35.667 "7f1ca74d-a68f-44e7-86a5-9727cec4d008" 00:20:35.667 ], 00:20:35.667 "product_name": "Malloc disk", 00:20:35.667 "block_size": 512, 00:20:35.667 "num_blocks": 65536, 00:20:35.667 "uuid": "7f1ca74d-a68f-44e7-86a5-9727cec4d008", 00:20:35.667 "assigned_rate_limits": { 00:20:35.667 "rw_ios_per_sec": 0, 00:20:35.667 "rw_mbytes_per_sec": 0, 00:20:35.667 "r_mbytes_per_sec": 0, 00:20:35.667 "w_mbytes_per_sec": 0 00:20:35.667 }, 00:20:35.667 "claimed": true, 00:20:35.667 "claim_type": "exclusive_write", 00:20:35.667 "zoned": false, 00:20:35.667 "supported_io_types": { 00:20:35.667 "read": true, 00:20:35.667 "write": true, 00:20:35.667 "unmap": true, 00:20:35.667 "flush": true, 00:20:35.667 "reset": true, 00:20:35.667 "nvme_admin": false, 00:20:35.667 "nvme_io": false, 00:20:35.667 "nvme_io_md": false, 00:20:35.667 "write_zeroes": true, 00:20:35.667 "zcopy": true, 00:20:35.667 "get_zone_info": false, 00:20:35.667 "zone_management": false, 00:20:35.667 "zone_append": false, 00:20:35.667 "compare": false, 00:20:35.667 "compare_and_write": false, 00:20:35.667 "abort": true, 00:20:35.667 "seek_hole": false, 00:20:35.667 "seek_data": false, 00:20:35.667 "copy": true, 00:20:35.667 "nvme_iov_md": false 00:20:35.667 }, 00:20:35.667 "memory_domains": [ 00:20:35.667 { 00:20:35.667 "dma_device_id": "system", 00:20:35.667 "dma_device_type": 1 00:20:35.667 }, 00:20:35.667 { 00:20:35.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.667 "dma_device_type": 2 00:20:35.667 } 00:20:35.667 ], 00:20:35.667 "driver_specific": {} 00:20:35.667 }' 00:20:35.667 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.667 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:35.925 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.183 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.183 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.183 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.183 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:36.183 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.442 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.442 "name": "BaseBdev3", 00:20:36.442 "aliases": [ 00:20:36.442 "53a40e10-ca51-4ee9-aaaa-3a455ece8e10" 00:20:36.442 ], 00:20:36.442 "product_name": "Malloc disk", 00:20:36.442 "block_size": 512, 
00:20:36.442 "num_blocks": 65536, 00:20:36.442 "uuid": "53a40e10-ca51-4ee9-aaaa-3a455ece8e10", 00:20:36.442 "assigned_rate_limits": { 00:20:36.442 "rw_ios_per_sec": 0, 00:20:36.442 "rw_mbytes_per_sec": 0, 00:20:36.442 "r_mbytes_per_sec": 0, 00:20:36.442 "w_mbytes_per_sec": 0 00:20:36.442 }, 00:20:36.442 "claimed": true, 00:20:36.442 "claim_type": "exclusive_write", 00:20:36.442 "zoned": false, 00:20:36.442 "supported_io_types": { 00:20:36.442 "read": true, 00:20:36.442 "write": true, 00:20:36.442 "unmap": true, 00:20:36.442 "flush": true, 00:20:36.442 "reset": true, 00:20:36.442 "nvme_admin": false, 00:20:36.442 "nvme_io": false, 00:20:36.442 "nvme_io_md": false, 00:20:36.442 "write_zeroes": true, 00:20:36.442 "zcopy": true, 00:20:36.442 "get_zone_info": false, 00:20:36.442 "zone_management": false, 00:20:36.442 "zone_append": false, 00:20:36.442 "compare": false, 00:20:36.442 "compare_and_write": false, 00:20:36.442 "abort": true, 00:20:36.442 "seek_hole": false, 00:20:36.442 "seek_data": false, 00:20:36.442 "copy": true, 00:20:36.442 "nvme_iov_md": false 00:20:36.442 }, 00:20:36.442 "memory_domains": [ 00:20:36.442 { 00:20:36.442 "dma_device_id": "system", 00:20:36.442 "dma_device_type": 1 00:20:36.442 }, 00:20:36.442 { 00:20:36.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.442 "dma_device_type": 2 00:20:36.442 } 00:20:36.442 ], 00:20:36.442 "driver_specific": {} 00:20:36.442 }' 00:20:36.442 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.442 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.442 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.442 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.442 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.700 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:36.959 [2024-07-15 14:13:22.891409] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:37.218 14:13:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:37.218 14:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:37.218 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:37.218 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.218 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:37.477 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:37.477 "name": "Existed_Raid", 00:20:37.477 "uuid": "3142752c-7aab-48b2-8590-c7d7176d3c6a", 00:20:37.477 "strip_size_kb": 0, 00:20:37.477 "state": "online", 00:20:37.477 "raid_level": "raid1", 00:20:37.477 "superblock": true, 00:20:37.477 "num_base_bdevs": 3, 00:20:37.477 "num_base_bdevs_discovered": 2, 00:20:37.477 "num_base_bdevs_operational": 2, 00:20:37.477 "base_bdevs_list": [ 00:20:37.477 { 00:20:37.477 "name": null, 00:20:37.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.477 "is_configured": false, 00:20:37.477 "data_offset": 2048, 00:20:37.477 "data_size": 63488 00:20:37.477 }, 00:20:37.477 { 00:20:37.477 "name": "BaseBdev2", 00:20:37.477 "uuid": "7f1ca74d-a68f-44e7-86a5-9727cec4d008", 00:20:37.477 "is_configured": true, 00:20:37.477 "data_offset": 2048, 00:20:37.477 "data_size": 63488 00:20:37.477 }, 00:20:37.477 { 00:20:37.477 "name": "BaseBdev3", 00:20:37.477 "uuid": "53a40e10-ca51-4ee9-aaaa-3a455ece8e10", 00:20:37.477 "is_configured": true, 00:20:37.477 "data_offset": 2048, 00:20:37.477 "data_size": 63488 00:20:37.477 } 00:20:37.477 ] 00:20:37.477 }' 00:20:37.477 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:37.477 14:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:38.043 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:38.043 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:38.043 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:38.043 14:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.301 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:38.301 14:13:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:38.301 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:38.556 [2024-07-15 14:13:24.536828] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:38.813 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:38.813 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:38.813 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.813 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:39.071 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:39.071 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:39.071 14:13:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:39.328 [2024-07-15 14:13:25.278846] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:39.328 [2024-07-15 14:13:25.279212] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.632 [2024-07-15 14:13:25.365789] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.632 [2024-07-15 14:13:25.365982] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.632 [2024-07-15 14:13:25.366094] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:39.632 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:39.890 BaseBdev2 00:20:40.147 14:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:40.148 14:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:20:40.148 14:13:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:40.148 14:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:40.148 14:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:40.148 14:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:40.148 14:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:40.406 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:40.665 [ 00:20:40.665 { 00:20:40.665 "name": "BaseBdev2", 00:20:40.665 "aliases": [ 00:20:40.665 "afc7627d-1bc5-4225-8d2e-9bc1566add2a" 00:20:40.665 ], 00:20:40.665 "product_name": "Malloc disk", 00:20:40.665 "block_size": 512, 00:20:40.665 "num_blocks": 65536, 00:20:40.665 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:40.665 "assigned_rate_limits": { 00:20:40.665 "rw_ios_per_sec": 0, 00:20:40.665 "rw_mbytes_per_sec": 0, 00:20:40.665 "r_mbytes_per_sec": 0, 00:20:40.665 "w_mbytes_per_sec": 0 00:20:40.665 }, 00:20:40.665 "claimed": false, 00:20:40.665 "zoned": false, 00:20:40.665 "supported_io_types": { 00:20:40.665 "read": true, 00:20:40.665 "write": true, 00:20:40.665 "unmap": true, 00:20:40.665 "flush": true, 00:20:40.665 "reset": true, 00:20:40.665 "nvme_admin": false, 00:20:40.665 "nvme_io": false, 00:20:40.665 "nvme_io_md": false, 00:20:40.665 "write_zeroes": true, 00:20:40.666 "zcopy": true, 00:20:40.666 "get_zone_info": false, 00:20:40.666 "zone_management": false, 00:20:40.666 "zone_append": false, 00:20:40.666 "compare": false, 00:20:40.666 "compare_and_write": false, 00:20:40.666 "abort": true, 00:20:40.666 "seek_hole": false, 00:20:40.666 "seek_data": false, 00:20:40.666 "copy": true, 00:20:40.666 "nvme_iov_md": false 00:20:40.666 }, 00:20:40.666 "memory_domains": [ 00:20:40.666 { 00:20:40.666 "dma_device_id": "system", 00:20:40.666 "dma_device_type": 1 00:20:40.666 }, 00:20:40.666 { 00:20:40.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:40.666 "dma_device_type": 2 00:20:40.666 } 00:20:40.666 ], 00:20:40.666 "driver_specific": {} 00:20:40.666 } 00:20:40.666 ] 00:20:40.666 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:40.666 14:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:40.666 14:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:40.666 14:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:40.924 BaseBdev3 00:20:40.924 14:13:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:40.924 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:20:40.924 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:40.924 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:40.924 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 
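The trace above builds each synthetic base device the same way: create a 32 MiB malloc bdev with 512-byte blocks over the test RPC socket (65536 blocks, matching the num_blocks in the dump), wait for bdev examine to finish, then poll bdev_get_bdevs until the device answers. A minimal stand-alone sketch of that pattern, using the commands visible in the log, is shown here; the final readiness check is an assumed simplification of the waitforbdev helper, not its exact body.

```bash
# Sketch of the per-base-bdev setup pattern from the trace above.
# The rpc.py invocations are copied from the log; the readiness check is an
# assumed stand-in for the waitforbdev helper in autotest_common.sh.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_malloc_create 32 512 -b BaseBdev3    # 65536 blocks x 512 B = 32 MiB
$rpc bdev_wait_for_examine                     # let examine/claim callbacks settle
$rpc bdev_get_bdevs -b BaseBdev3 -t 2000 > /dev/null \
    && echo "BaseBdev3 is visible"             # -t 2000: wait up to 2000 ms for the bdev
```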
00:20:40.924 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:40.924 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:41.183 14:13:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:41.441 [ 00:20:41.441 { 00:20:41.441 "name": "BaseBdev3", 00:20:41.441 "aliases": [ 00:20:41.441 "8cfafdd8-04c0-4f75-8044-e3a7006a9857" 00:20:41.441 ], 00:20:41.441 "product_name": "Malloc disk", 00:20:41.441 "block_size": 512, 00:20:41.441 "num_blocks": 65536, 00:20:41.441 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:41.441 "assigned_rate_limits": { 00:20:41.441 "rw_ios_per_sec": 0, 00:20:41.441 "rw_mbytes_per_sec": 0, 00:20:41.441 "r_mbytes_per_sec": 0, 00:20:41.441 "w_mbytes_per_sec": 0 00:20:41.441 }, 00:20:41.441 "claimed": false, 00:20:41.441 "zoned": false, 00:20:41.441 "supported_io_types": { 00:20:41.441 "read": true, 00:20:41.441 "write": true, 00:20:41.441 "unmap": true, 00:20:41.441 "flush": true, 00:20:41.441 "reset": true, 00:20:41.441 "nvme_admin": false, 00:20:41.441 "nvme_io": false, 00:20:41.441 "nvme_io_md": false, 00:20:41.441 "write_zeroes": true, 00:20:41.441 "zcopy": true, 00:20:41.441 "get_zone_info": false, 00:20:41.441 "zone_management": false, 00:20:41.441 "zone_append": false, 00:20:41.441 "compare": false, 00:20:41.441 "compare_and_write": false, 00:20:41.441 "abort": true, 00:20:41.441 "seek_hole": false, 00:20:41.441 "seek_data": false, 00:20:41.441 "copy": true, 00:20:41.441 "nvme_iov_md": false 00:20:41.441 }, 00:20:41.441 "memory_domains": [ 00:20:41.441 { 00:20:41.441 "dma_device_id": "system", 00:20:41.441 "dma_device_type": 1 00:20:41.441 }, 00:20:41.441 { 00:20:41.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:41.441 "dma_device_type": 2 00:20:41.442 } 00:20:41.442 ], 00:20:41.442 "driver_specific": {} 00:20:41.442 } 00:20:41.442 ] 00:20:41.442 14:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:41.442 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:41.442 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:41.442 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:41.701 [2024-07-15 14:13:27.450692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:41.701 [2024-07-15 14:13:27.451068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:41.701 [2024-07-15 14:13:27.451215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:41.701 [2024-07-15 14:13:27.452705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:41.701 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.960 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.960 "name": "Existed_Raid", 00:20:41.960 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:41.960 "strip_size_kb": 0, 00:20:41.960 "state": "configuring", 00:20:41.960 "raid_level": "raid1", 00:20:41.960 "superblock": true, 00:20:41.960 "num_base_bdevs": 3, 00:20:41.960 "num_base_bdevs_discovered": 2, 00:20:41.960 "num_base_bdevs_operational": 3, 00:20:41.960 "base_bdevs_list": [ 00:20:41.960 { 00:20:41.960 "name": "BaseBdev1", 00:20:41.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.960 "is_configured": false, 00:20:41.960 "data_offset": 0, 00:20:41.960 "data_size": 0 00:20:41.960 }, 00:20:41.960 { 00:20:41.960 "name": "BaseBdev2", 00:20:41.960 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:41.960 "is_configured": true, 00:20:41.960 "data_offset": 2048, 00:20:41.960 "data_size": 63488 00:20:41.960 }, 00:20:41.960 { 00:20:41.960 "name": "BaseBdev3", 00:20:41.960 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:41.960 "is_configured": true, 00:20:41.960 "data_offset": 2048, 00:20:41.960 "data_size": 63488 00:20:41.960 } 00:20:41.960 ] 00:20:41.960 }' 00:20:41.960 14:13:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.960 14:13:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:42.528 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:42.787 [2024-07-15 14:13:28.655121] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:42.787 14:13:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.787 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.075 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.075 "name": "Existed_Raid", 00:20:43.075 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:43.075 "strip_size_kb": 0, 00:20:43.075 "state": "configuring", 00:20:43.075 "raid_level": "raid1", 00:20:43.075 "superblock": true, 00:20:43.075 "num_base_bdevs": 3, 00:20:43.075 "num_base_bdevs_discovered": 1, 00:20:43.075 "num_base_bdevs_operational": 3, 00:20:43.075 "base_bdevs_list": [ 00:20:43.075 { 00:20:43.075 "name": "BaseBdev1", 00:20:43.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.075 "is_configured": false, 00:20:43.075 "data_offset": 0, 00:20:43.075 "data_size": 0 00:20:43.075 }, 00:20:43.075 { 00:20:43.075 "name": null, 00:20:43.075 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:43.075 "is_configured": false, 00:20:43.075 "data_offset": 2048, 00:20:43.075 "data_size": 63488 00:20:43.075 }, 00:20:43.075 { 00:20:43.075 "name": "BaseBdev3", 00:20:43.075 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:43.075 "is_configured": true, 00:20:43.075 "data_offset": 2048, 00:20:43.075 "data_size": 63488 00:20:43.075 } 00:20:43.075 ] 00:20:43.075 }' 00:20:43.075 14:13:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.075 14:13:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:43.662 14:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.662 14:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:43.922 14:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:43.922 14:13:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:44.180 [2024-07-15 14:13:30.045343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.180 BaseBdev1 00:20:44.180 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:44.180 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:20:44.180 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:44.180 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:44.180 14:13:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:44.180 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:44.180 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:44.439 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:44.698 [ 00:20:44.698 { 00:20:44.698 "name": "BaseBdev1", 00:20:44.698 "aliases": [ 00:20:44.698 "1040df75-bcc0-467e-a0aa-30f8e6de05e5" 00:20:44.698 ], 00:20:44.698 "product_name": "Malloc disk", 00:20:44.698 "block_size": 512, 00:20:44.698 "num_blocks": 65536, 00:20:44.698 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:44.698 "assigned_rate_limits": { 00:20:44.698 "rw_ios_per_sec": 0, 00:20:44.698 "rw_mbytes_per_sec": 0, 00:20:44.698 "r_mbytes_per_sec": 0, 00:20:44.698 "w_mbytes_per_sec": 0 00:20:44.698 }, 00:20:44.698 "claimed": true, 00:20:44.698 "claim_type": "exclusive_write", 00:20:44.698 "zoned": false, 00:20:44.698 "supported_io_types": { 00:20:44.698 "read": true, 00:20:44.698 "write": true, 00:20:44.698 "unmap": true, 00:20:44.698 "flush": true, 00:20:44.698 "reset": true, 00:20:44.698 "nvme_admin": false, 00:20:44.698 "nvme_io": false, 00:20:44.698 "nvme_io_md": false, 00:20:44.698 "write_zeroes": true, 00:20:44.698 "zcopy": true, 00:20:44.698 "get_zone_info": false, 00:20:44.698 "zone_management": false, 00:20:44.698 "zone_append": false, 00:20:44.698 "compare": false, 00:20:44.698 "compare_and_write": false, 00:20:44.698 "abort": true, 00:20:44.698 "seek_hole": false, 00:20:44.698 "seek_data": false, 00:20:44.698 "copy": true, 00:20:44.698 "nvme_iov_md": false 00:20:44.698 }, 00:20:44.698 "memory_domains": [ 00:20:44.698 { 00:20:44.698 "dma_device_id": "system", 00:20:44.698 "dma_device_type": 1 00:20:44.698 }, 00:20:44.698 { 00:20:44.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.698 "dma_device_type": 2 00:20:44.698 } 00:20:44.698 ], 00:20:44.698 "driver_specific": {} 00:20:44.698 } 00:20:44.698 ] 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
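Each verify_raid_bdev_state call in the trace boils down to one RPC and a handful of field comparisons: fetch the raid bdev list, select the entry named Existed_Raid with jq, and check its state, RAID level, and operational base-bdev count against the expected values. The sketch below reproduces that flow with the exact RPC call and jq filter from the log; the specific comparisons are an illustrative reduction of the helper, not its literal code.

```bash
# Sketch of the state check behind verify_raid_bdev_state in the trace.
# The RPC call and jq filter are taken verbatim from the log; the comparisons
# are assumptions standing in for the helper's full logic.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

[ "$(jq -r '.state' <<< "$info")" = "configuring" ]            || echo "state mismatch"
[ "$(jq -r '.raid_level' <<< "$info")" = "raid1" ]             || echo "raid level mismatch"
[ "$(jq -r '.num_base_bdevs_operational' <<< "$info")" -eq 3 ] || echo "operational count mismatch"
```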
00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.698 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.956 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:44.956 "name": "Existed_Raid", 00:20:44.956 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:44.956 "strip_size_kb": 0, 00:20:44.956 "state": "configuring", 00:20:44.956 "raid_level": "raid1", 00:20:44.956 "superblock": true, 00:20:44.956 "num_base_bdevs": 3, 00:20:44.956 "num_base_bdevs_discovered": 2, 00:20:44.956 "num_base_bdevs_operational": 3, 00:20:44.956 "base_bdevs_list": [ 00:20:44.956 { 00:20:44.956 "name": "BaseBdev1", 00:20:44.956 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:44.956 "is_configured": true, 00:20:44.956 "data_offset": 2048, 00:20:44.956 "data_size": 63488 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "name": null, 00:20:44.956 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:44.956 "is_configured": false, 00:20:44.956 "data_offset": 2048, 00:20:44.956 "data_size": 63488 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "name": "BaseBdev3", 00:20:44.956 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:44.956 "is_configured": true, 00:20:44.956 "data_offset": 2048, 00:20:44.956 "data_size": 63488 00:20:44.956 } 00:20:44.956 ] 00:20:44.956 }' 00:20:44.956 14:13:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:44.956 14:13:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:45.523 14:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:45.523 14:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.091 14:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:46.091 14:13:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:46.091 [2024-07-15 14:13:32.017694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.091 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:46.408 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:46.408 "name": "Existed_Raid", 00:20:46.408 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:46.408 "strip_size_kb": 0, 00:20:46.408 "state": "configuring", 00:20:46.408 "raid_level": "raid1", 00:20:46.408 "superblock": true, 00:20:46.408 "num_base_bdevs": 3, 00:20:46.408 "num_base_bdevs_discovered": 1, 00:20:46.408 "num_base_bdevs_operational": 3, 00:20:46.408 "base_bdevs_list": [ 00:20:46.408 { 00:20:46.408 "name": "BaseBdev1", 00:20:46.408 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:46.408 "is_configured": true, 00:20:46.408 "data_offset": 2048, 00:20:46.408 "data_size": 63488 00:20:46.408 }, 00:20:46.408 { 00:20:46.408 "name": null, 00:20:46.408 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:46.408 "is_configured": false, 00:20:46.408 "data_offset": 2048, 00:20:46.408 "data_size": 63488 00:20:46.408 }, 00:20:46.408 { 00:20:46.408 "name": null, 00:20:46.408 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:46.408 "is_configured": false, 00:20:46.408 "data_offset": 2048, 00:20:46.408 "data_size": 63488 00:20:46.408 } 00:20:46.408 ] 00:20:46.408 }' 00:20:46.408 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:46.408 14:13:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:46.991 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:46.991 14:13:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:47.251 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:47.251 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:47.509 [2024-07-15 14:13:33.453962] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:47.509 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:47.767 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:47.767 "name": "Existed_Raid", 00:20:47.767 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:47.767 "strip_size_kb": 0, 00:20:47.767 "state": "configuring", 00:20:47.767 "raid_level": "raid1", 00:20:47.767 "superblock": true, 00:20:47.767 "num_base_bdevs": 3, 00:20:47.767 "num_base_bdevs_discovered": 2, 00:20:47.767 "num_base_bdevs_operational": 3, 00:20:47.767 "base_bdevs_list": [ 00:20:47.767 { 00:20:47.767 "name": "BaseBdev1", 00:20:47.767 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:47.767 "is_configured": true, 00:20:47.767 "data_offset": 2048, 00:20:47.767 "data_size": 63488 00:20:47.767 }, 00:20:47.767 { 00:20:47.767 "name": null, 00:20:47.767 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:47.767 "is_configured": false, 00:20:47.767 "data_offset": 2048, 00:20:47.767 "data_size": 63488 00:20:47.767 }, 00:20:47.767 { 00:20:47.767 "name": "BaseBdev3", 00:20:47.767 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:47.767 "is_configured": true, 00:20:47.767 "data_offset": 2048, 00:20:47.767 "data_size": 63488 00:20:47.767 } 00:20:47.767 ] 00:20:47.767 }' 00:20:47.767 14:13:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:47.767 14:13:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:48.702 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:48.702 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:48.961 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:48.961 14:13:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:48.961 [2024-07-15 14:13:34.954180] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.218 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:49.476 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:49.476 "name": "Existed_Raid", 00:20:49.476 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:49.476 "strip_size_kb": 0, 00:20:49.476 "state": "configuring", 00:20:49.476 "raid_level": "raid1", 00:20:49.476 "superblock": true, 00:20:49.476 "num_base_bdevs": 3, 00:20:49.476 "num_base_bdevs_discovered": 1, 00:20:49.476 "num_base_bdevs_operational": 3, 00:20:49.476 "base_bdevs_list": [ 00:20:49.476 { 00:20:49.476 "name": null, 00:20:49.476 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:49.476 "is_configured": false, 00:20:49.476 "data_offset": 2048, 00:20:49.476 "data_size": 63488 00:20:49.476 }, 00:20:49.476 { 00:20:49.476 "name": null, 00:20:49.476 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:49.476 "is_configured": false, 00:20:49.476 "data_offset": 2048, 00:20:49.476 "data_size": 63488 00:20:49.476 }, 00:20:49.476 { 00:20:49.476 "name": "BaseBdev3", 00:20:49.476 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:49.476 "is_configured": true, 00:20:49.476 "data_offset": 2048, 00:20:49.476 "data_size": 63488 00:20:49.476 } 00:20:49.476 ] 00:20:49.476 }' 00:20:49.476 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:49.476 14:13:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:50.042 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.042 14:13:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:50.300 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:50.300 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:50.558 [2024-07-15 14:13:36.538382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.559 14:13:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.559 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.816 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.817 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:51.075 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:51.075 "name": "Existed_Raid", 00:20:51.075 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:51.075 "strip_size_kb": 0, 00:20:51.075 "state": "configuring", 00:20:51.075 "raid_level": "raid1", 00:20:51.075 "superblock": true, 00:20:51.075 "num_base_bdevs": 3, 00:20:51.075 "num_base_bdevs_discovered": 2, 00:20:51.075 "num_base_bdevs_operational": 3, 00:20:51.075 "base_bdevs_list": [ 00:20:51.075 { 00:20:51.075 "name": null, 00:20:51.075 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:51.075 "is_configured": false, 00:20:51.075 "data_offset": 2048, 00:20:51.075 "data_size": 63488 00:20:51.075 }, 00:20:51.075 { 00:20:51.075 "name": "BaseBdev2", 00:20:51.075 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:51.075 "is_configured": true, 00:20:51.075 "data_offset": 2048, 00:20:51.075 "data_size": 63488 00:20:51.075 }, 00:20:51.075 { 00:20:51.075 "name": "BaseBdev3", 00:20:51.075 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:51.075 "is_configured": true, 00:20:51.075 "data_offset": 2048, 00:20:51.075 "data_size": 63488 00:20:51.075 } 00:20:51.075 ] 00:20:51.075 }' 00:20:51.075 14:13:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:51.075 14:13:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:51.643 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.643 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:51.901 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:51.901 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.901 14:13:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:52.159 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1040df75-bcc0-467e-a0aa-30f8e6de05e5 00:20:52.417 [2024-07-15 14:13:38.382159] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:52.417 [2024-07-15 14:13:38.382354] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:20:52.417 [2024-07-15 
14:13:38.382370] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:52.417 [2024-07-15 14:13:38.382451] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:52.417 [2024-07-15 14:13:38.382671] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:20:52.417 [2024-07-15 14:13:38.382692] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008a80 00:20:52.417 [2024-07-15 14:13:38.382823] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.417 NewBaseBdev 00:20:52.417 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:52.417 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:20:52.417 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:20:52.417 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:20:52.417 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:20:52.417 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:20:52.417 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:52.985 [ 00:20:52.985 { 00:20:52.985 "name": "NewBaseBdev", 00:20:52.985 "aliases": [ 00:20:52.985 "1040df75-bcc0-467e-a0aa-30f8e6de05e5" 00:20:52.985 ], 00:20:52.985 "product_name": "Malloc disk", 00:20:52.985 "block_size": 512, 00:20:52.985 "num_blocks": 65536, 00:20:52.985 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:52.985 "assigned_rate_limits": { 00:20:52.985 "rw_ios_per_sec": 0, 00:20:52.985 "rw_mbytes_per_sec": 0, 00:20:52.985 "r_mbytes_per_sec": 0, 00:20:52.985 "w_mbytes_per_sec": 0 00:20:52.985 }, 00:20:52.985 "claimed": true, 00:20:52.985 "claim_type": "exclusive_write", 00:20:52.985 "zoned": false, 00:20:52.985 "supported_io_types": { 00:20:52.985 "read": true, 00:20:52.985 "write": true, 00:20:52.985 "unmap": true, 00:20:52.985 "flush": true, 00:20:52.985 "reset": true, 00:20:52.985 "nvme_admin": false, 00:20:52.985 "nvme_io": false, 00:20:52.985 "nvme_io_md": false, 00:20:52.985 "write_zeroes": true, 00:20:52.985 "zcopy": true, 00:20:52.985 "get_zone_info": false, 00:20:52.985 "zone_management": false, 00:20:52.985 "zone_append": false, 00:20:52.985 "compare": false, 00:20:52.985 "compare_and_write": false, 00:20:52.985 "abort": true, 00:20:52.985 "seek_hole": false, 00:20:52.985 "seek_data": false, 00:20:52.985 "copy": true, 00:20:52.985 "nvme_iov_md": false 00:20:52.985 }, 00:20:52.985 "memory_domains": [ 00:20:52.985 { 00:20:52.985 "dma_device_id": "system", 00:20:52.985 "dma_device_type": 1 00:20:52.985 }, 00:20:52.985 { 00:20:52.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:52.985 "dma_device_type": 2 00:20:52.985 } 00:20:52.985 ], 00:20:52.985 "driver_specific": {} 00:20:52.985 } 00:20:52.985 ] 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:20:52.985 14:13:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.985 14:13:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:53.243 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:53.243 "name": "Existed_Raid", 00:20:53.243 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:53.243 "strip_size_kb": 0, 00:20:53.243 "state": "online", 00:20:53.243 "raid_level": "raid1", 00:20:53.243 "superblock": true, 00:20:53.243 "num_base_bdevs": 3, 00:20:53.243 "num_base_bdevs_discovered": 3, 00:20:53.243 "num_base_bdevs_operational": 3, 00:20:53.243 "base_bdevs_list": [ 00:20:53.243 { 00:20:53.243 "name": "NewBaseBdev", 00:20:53.243 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:53.243 "is_configured": true, 00:20:53.243 "data_offset": 2048, 00:20:53.243 "data_size": 63488 00:20:53.243 }, 00:20:53.243 { 00:20:53.243 "name": "BaseBdev2", 00:20:53.243 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:53.243 "is_configured": true, 00:20:53.243 "data_offset": 2048, 00:20:53.243 "data_size": 63488 00:20:53.243 }, 00:20:53.243 { 00:20:53.243 "name": "BaseBdev3", 00:20:53.243 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:53.243 "is_configured": true, 00:20:53.243 "data_offset": 2048, 00:20:53.243 "data_size": 63488 00:20:53.243 } 00:20:53.243 ] 00:20:53.243 }' 00:20:53.243 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:53.243 14:13:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@198 -- # local name 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:53.812 14:13:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:54.071 [2024-07-15 14:13:40.034676] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:54.071 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:54.071 "name": "Existed_Raid", 00:20:54.071 "aliases": [ 00:20:54.071 "a5656d2e-ece5-4189-9b57-ddf9ff3e8575" 00:20:54.071 ], 00:20:54.071 "product_name": "Raid Volume", 00:20:54.071 "block_size": 512, 00:20:54.071 "num_blocks": 63488, 00:20:54.071 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:54.071 "assigned_rate_limits": { 00:20:54.071 "rw_ios_per_sec": 0, 00:20:54.071 "rw_mbytes_per_sec": 0, 00:20:54.071 "r_mbytes_per_sec": 0, 00:20:54.071 "w_mbytes_per_sec": 0 00:20:54.071 }, 00:20:54.071 "claimed": false, 00:20:54.071 "zoned": false, 00:20:54.071 "supported_io_types": { 00:20:54.071 "read": true, 00:20:54.071 "write": true, 00:20:54.071 "unmap": false, 00:20:54.071 "flush": false, 00:20:54.071 "reset": true, 00:20:54.071 "nvme_admin": false, 00:20:54.071 "nvme_io": false, 00:20:54.071 "nvme_io_md": false, 00:20:54.071 "write_zeroes": true, 00:20:54.071 "zcopy": false, 00:20:54.071 "get_zone_info": false, 00:20:54.071 "zone_management": false, 00:20:54.071 "zone_append": false, 00:20:54.071 "compare": false, 00:20:54.071 "compare_and_write": false, 00:20:54.071 "abort": false, 00:20:54.071 "seek_hole": false, 00:20:54.071 "seek_data": false, 00:20:54.071 "copy": false, 00:20:54.071 "nvme_iov_md": false 00:20:54.071 }, 00:20:54.071 "memory_domains": [ 00:20:54.071 { 00:20:54.071 "dma_device_id": "system", 00:20:54.071 "dma_device_type": 1 00:20:54.071 }, 00:20:54.071 { 00:20:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.071 "dma_device_type": 2 00:20:54.071 }, 00:20:54.071 { 00:20:54.071 "dma_device_id": "system", 00:20:54.071 "dma_device_type": 1 00:20:54.071 }, 00:20:54.071 { 00:20:54.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.071 "dma_device_type": 2 00:20:54.071 }, 00:20:54.071 { 00:20:54.072 "dma_device_id": "system", 00:20:54.072 "dma_device_type": 1 00:20:54.072 }, 00:20:54.072 { 00:20:54.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.072 "dma_device_type": 2 00:20:54.072 } 00:20:54.072 ], 00:20:54.072 "driver_specific": { 00:20:54.072 "raid": { 00:20:54.072 "uuid": "a5656d2e-ece5-4189-9b57-ddf9ff3e8575", 00:20:54.072 "strip_size_kb": 0, 00:20:54.072 "state": "online", 00:20:54.072 "raid_level": "raid1", 00:20:54.072 "superblock": true, 00:20:54.072 "num_base_bdevs": 3, 00:20:54.072 "num_base_bdevs_discovered": 3, 00:20:54.072 "num_base_bdevs_operational": 3, 00:20:54.072 "base_bdevs_list": [ 00:20:54.072 { 00:20:54.072 "name": "NewBaseBdev", 00:20:54.072 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:54.072 "is_configured": true, 00:20:54.072 "data_offset": 2048, 00:20:54.072 "data_size": 63488 00:20:54.072 }, 00:20:54.072 { 00:20:54.072 "name": "BaseBdev2", 00:20:54.072 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:54.072 "is_configured": true, 00:20:54.072 "data_offset": 2048, 00:20:54.072 "data_size": 63488 00:20:54.072 }, 00:20:54.072 { 00:20:54.072 "name": "BaseBdev3", 00:20:54.072 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:54.072 "is_configured": true, 
00:20:54.072 "data_offset": 2048, 00:20:54.072 "data_size": 63488 00:20:54.072 } 00:20:54.072 ] 00:20:54.072 } 00:20:54.072 } 00:20:54.072 }' 00:20:54.072 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:54.331 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:54.331 BaseBdev2 00:20:54.331 BaseBdev3' 00:20:54.331 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:54.331 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:54.331 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:54.590 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:54.590 "name": "NewBaseBdev", 00:20:54.590 "aliases": [ 00:20:54.590 "1040df75-bcc0-467e-a0aa-30f8e6de05e5" 00:20:54.590 ], 00:20:54.590 "product_name": "Malloc disk", 00:20:54.590 "block_size": 512, 00:20:54.590 "num_blocks": 65536, 00:20:54.590 "uuid": "1040df75-bcc0-467e-a0aa-30f8e6de05e5", 00:20:54.590 "assigned_rate_limits": { 00:20:54.590 "rw_ios_per_sec": 0, 00:20:54.590 "rw_mbytes_per_sec": 0, 00:20:54.590 "r_mbytes_per_sec": 0, 00:20:54.590 "w_mbytes_per_sec": 0 00:20:54.590 }, 00:20:54.590 "claimed": true, 00:20:54.590 "claim_type": "exclusive_write", 00:20:54.590 "zoned": false, 00:20:54.590 "supported_io_types": { 00:20:54.590 "read": true, 00:20:54.590 "write": true, 00:20:54.590 "unmap": true, 00:20:54.590 "flush": true, 00:20:54.590 "reset": true, 00:20:54.590 "nvme_admin": false, 00:20:54.590 "nvme_io": false, 00:20:54.590 "nvme_io_md": false, 00:20:54.590 "write_zeroes": true, 00:20:54.590 "zcopy": true, 00:20:54.590 "get_zone_info": false, 00:20:54.590 "zone_management": false, 00:20:54.590 "zone_append": false, 00:20:54.590 "compare": false, 00:20:54.590 "compare_and_write": false, 00:20:54.590 "abort": true, 00:20:54.590 "seek_hole": false, 00:20:54.590 "seek_data": false, 00:20:54.590 "copy": true, 00:20:54.590 "nvme_iov_md": false 00:20:54.590 }, 00:20:54.590 "memory_domains": [ 00:20:54.590 { 00:20:54.590 "dma_device_id": "system", 00:20:54.590 "dma_device_type": 1 00:20:54.590 }, 00:20:54.590 { 00:20:54.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:54.590 "dma_device_type": 2 00:20:54.590 } 00:20:54.590 ], 00:20:54.590 "driver_specific": {} 00:20:54.590 }' 00:20:54.590 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.590 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:54.590 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:54.590 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.590 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- 
# [[ null == null ]] 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:54.849 14:13:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.107 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.107 "name": "BaseBdev2", 00:20:55.107 "aliases": [ 00:20:55.107 "afc7627d-1bc5-4225-8d2e-9bc1566add2a" 00:20:55.107 ], 00:20:55.107 "product_name": "Malloc disk", 00:20:55.107 "block_size": 512, 00:20:55.107 "num_blocks": 65536, 00:20:55.107 "uuid": "afc7627d-1bc5-4225-8d2e-9bc1566add2a", 00:20:55.107 "assigned_rate_limits": { 00:20:55.107 "rw_ios_per_sec": 0, 00:20:55.107 "rw_mbytes_per_sec": 0, 00:20:55.107 "r_mbytes_per_sec": 0, 00:20:55.107 "w_mbytes_per_sec": 0 00:20:55.107 }, 00:20:55.107 "claimed": true, 00:20:55.107 "claim_type": "exclusive_write", 00:20:55.107 "zoned": false, 00:20:55.107 "supported_io_types": { 00:20:55.107 "read": true, 00:20:55.107 "write": true, 00:20:55.107 "unmap": true, 00:20:55.107 "flush": true, 00:20:55.107 "reset": true, 00:20:55.107 "nvme_admin": false, 00:20:55.107 "nvme_io": false, 00:20:55.107 "nvme_io_md": false, 00:20:55.107 "write_zeroes": true, 00:20:55.107 "zcopy": true, 00:20:55.107 "get_zone_info": false, 00:20:55.107 "zone_management": false, 00:20:55.107 "zone_append": false, 00:20:55.107 "compare": false, 00:20:55.107 "compare_and_write": false, 00:20:55.107 "abort": true, 00:20:55.107 "seek_hole": false, 00:20:55.107 "seek_data": false, 00:20:55.107 "copy": true, 00:20:55.107 "nvme_iov_md": false 00:20:55.107 }, 00:20:55.107 "memory_domains": [ 00:20:55.107 { 00:20:55.107 "dma_device_id": "system", 00:20:55.107 "dma_device_type": 1 00:20:55.107 }, 00:20:55.107 { 00:20:55.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.107 "dma_device_type": 2 00:20:55.107 } 00:20:55.107 ], 00:20:55.107 "driver_specific": {} 00:20:55.107 }' 00:20:55.107 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.366 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.366 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:55.366 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.367 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:55.367 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:55.367 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.367 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:55.625 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:55.625 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.625 14:13:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:55.625 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:55.625 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:55.625 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:55.625 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:55.883 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:55.883 "name": "BaseBdev3", 00:20:55.883 "aliases": [ 00:20:55.883 "8cfafdd8-04c0-4f75-8044-e3a7006a9857" 00:20:55.883 ], 00:20:55.883 "product_name": "Malloc disk", 00:20:55.883 "block_size": 512, 00:20:55.883 "num_blocks": 65536, 00:20:55.883 "uuid": "8cfafdd8-04c0-4f75-8044-e3a7006a9857", 00:20:55.883 "assigned_rate_limits": { 00:20:55.883 "rw_ios_per_sec": 0, 00:20:55.883 "rw_mbytes_per_sec": 0, 00:20:55.883 "r_mbytes_per_sec": 0, 00:20:55.883 "w_mbytes_per_sec": 0 00:20:55.883 }, 00:20:55.883 "claimed": true, 00:20:55.883 "claim_type": "exclusive_write", 00:20:55.883 "zoned": false, 00:20:55.883 "supported_io_types": { 00:20:55.883 "read": true, 00:20:55.883 "write": true, 00:20:55.883 "unmap": true, 00:20:55.883 "flush": true, 00:20:55.883 "reset": true, 00:20:55.883 "nvme_admin": false, 00:20:55.883 "nvme_io": false, 00:20:55.883 "nvme_io_md": false, 00:20:55.883 "write_zeroes": true, 00:20:55.883 "zcopy": true, 00:20:55.883 "get_zone_info": false, 00:20:55.883 "zone_management": false, 00:20:55.883 "zone_append": false, 00:20:55.883 "compare": false, 00:20:55.883 "compare_and_write": false, 00:20:55.883 "abort": true, 00:20:55.883 "seek_hole": false, 00:20:55.883 "seek_data": false, 00:20:55.883 "copy": true, 00:20:55.883 "nvme_iov_md": false 00:20:55.883 }, 00:20:55.883 "memory_domains": [ 00:20:55.883 { 00:20:55.883 "dma_device_id": "system", 00:20:55.883 "dma_device_type": 1 00:20:55.883 }, 00:20:55.884 { 00:20:55.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:55.884 "dma_device_type": 2 00:20:55.884 } 00:20:55.884 ], 00:20:55.884 "driver_specific": {} 00:20:55.884 }' 00:20:55.884 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:55.884 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:56.143 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:56.143 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.143 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:56.143 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:56.143 14:13:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.143 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:56.143 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:56.143 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.143 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:56.401 14:13:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:56.401 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:56.660 [2024-07-15 14:13:42.438825] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:56.660 [2024-07-15 14:13:42.438871] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:56.660 [2024-07-15 14:13:42.438980] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:56.660 [2024-07-15 14:13:42.439165] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:56.660 [2024-07-15 14:13:42.439178] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name Existed_Raid, state offline 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 198159 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 198159 ']' 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 198159 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 198159 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 198159' 00:20:56.660 killing process with pid 198159 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 198159 00:20:56.660 [2024-07-15 14:13:42.477661] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:56.660 14:13:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 198159 00:20:56.919 [2024-07-15 14:13:42.726141] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:58.294 14:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:58.294 00:20:58.294 real 0m32.713s 00:20:58.294 user 1m0.292s 00:20:58.294 sys 0m3.774s 00:20:58.294 14:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:58.294 14:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:58.294 ************************************ 00:20:58.294 END TEST raid_state_function_test_sb 00:20:58.294 ************************************ 00:20:58.294 14:13:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:20:58.294 14:13:43 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:58.294 14:13:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:58.294 14:13:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:58.294 14:13:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:58.294 ************************************ 00:20:58.294 START TEST 
raid_superblock_test 00:20:58.294 ************************************ 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:58.294 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=199163 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 199163 /var/tmp/spdk-raid.sock 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 199163 ']' 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:58.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.295 14:13:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.295 [2024-07-15 14:13:43.972574] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
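The trace above launches a standalone bdev_svc application on a private RPC socket (/var/tmp/spdk-raid.sock) and then drives it purely through rpc.py -s; the RPCs that follow build three malloc bdevs, wrap each one in a passthru bdev with a fixed UUID, and assemble them into a raid1 volume with an on-disk superblock. A rough interactive sketch of that fixture, reconstructed only from the commands visible in this log (the loop, the manual backgrounding, and skipping the harness's waitforlisten helper are simplifications for illustration, not what autotest actually runs):

# Sketch of the fixture raid_superblock_test builds, reconstructed from the traced RPCs.
# Paths, sizes (32 MiB malloc bdevs, 512-byte blocks) and UUIDs are copied from this log.
SPDK=/home/vagrant/spdk_repo/spdk
SOCK=/var/tmp/spdk-raid.sock

"$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
# (the real harness waits for the RPC socket via its waitforlisten helper before issuing RPCs)

for i in 1 2 3; do
    # one malloc bdev per leg, wrapped in a passthru bdev with a fixed UUID
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_malloc_create 32 512 -b "malloc$i"
    "$SPDK"/scripts/rpc.py -s "$SOCK" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# assemble the three passthru bdevs into a raid1 volume; -s writes a superblock on each leg
"$SPDK"/scripts/rpc.py -s "$SOCK" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s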
00:20:58.295 [2024-07-15 14:13:43.972851] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199163 ] 00:20:58.295 [2024-07-15 14:13:44.138693] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.553 [2024-07-15 14:13:44.393587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.814 [2024-07-15 14:13:44.597258] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.073 14:13:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:59.331 malloc1 00:20:59.331 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:59.589 [2024-07-15 14:13:45.491073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:59.589 [2024-07-15 14:13:45.491526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:59.589 [2024-07-15 14:13:45.491658] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:59.589 [2024-07-15 14:13:45.491820] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:59.589 [2024-07-15 14:13:45.493797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:59.589 [2024-07-15 14:13:45.493923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:59.589 pt1 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:59.589 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:59.847 malloc2 00:20:59.847 14:13:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:00.105 [2024-07-15 14:13:46.063557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:00.105 [2024-07-15 14:13:46.063856] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.105 [2024-07-15 14:13:46.063963] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:00.105 [2024-07-15 14:13:46.064042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.105 [2024-07-15 14:13:46.065886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.105 [2024-07-15 14:13:46.066007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:00.105 pt2 00:21:00.105 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:00.105 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:00.106 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:21:00.106 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:21:00.106 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:21:00.106 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:21:00.106 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:21:00.106 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:21:00.106 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:21:00.364 malloc3 00:21:00.364 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:00.929 [2024-07-15 14:13:46.645688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:00.929 [2024-07-15 14:13:46.645825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:00.929 [2024-07-15 14:13:46.645861] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:00.929 [2024-07-15 14:13:46.645891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:00.929 [2024-07-15 14:13:46.647597] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:00.929 [2024-07-15 14:13:46.647655] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:00.929 pt3 00:21:00.929 
14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:21:00.929 [2024-07-15 14:13:46.881744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:00.929 [2024-07-15 14:13:46.883215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:00.929 [2024-07-15 14:13:46.883280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:00.929 [2024-07-15 14:13:46.883473] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:21:00.929 [2024-07-15 14:13:46.883498] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:00.929 [2024-07-15 14:13:46.883622] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:00.929 [2024-07-15 14:13:46.883919] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:21:00.929 [2024-07-15 14:13:46.883942] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:21:00.929 [2024-07-15 14:13:46.884061] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:00.929 14:13:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.191 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:01.191 "name": "raid_bdev1", 00:21:01.191 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:01.191 "strip_size_kb": 0, 00:21:01.191 "state": "online", 00:21:01.191 "raid_level": "raid1", 00:21:01.191 "superblock": true, 00:21:01.192 "num_base_bdevs": 3, 00:21:01.192 "num_base_bdevs_discovered": 3, 00:21:01.192 "num_base_bdevs_operational": 3, 00:21:01.192 "base_bdevs_list": [ 00:21:01.192 { 00:21:01.192 "name": "pt1", 00:21:01.192 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:01.192 
"is_configured": true, 00:21:01.192 "data_offset": 2048, 00:21:01.192 "data_size": 63488 00:21:01.192 }, 00:21:01.192 { 00:21:01.192 "name": "pt2", 00:21:01.192 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:01.192 "is_configured": true, 00:21:01.192 "data_offset": 2048, 00:21:01.192 "data_size": 63488 00:21:01.192 }, 00:21:01.192 { 00:21:01.192 "name": "pt3", 00:21:01.192 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:01.192 "is_configured": true, 00:21:01.192 "data_offset": 2048, 00:21:01.192 "data_size": 63488 00:21:01.192 } 00:21:01.192 ] 00:21:01.192 }' 00:21:01.192 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:01.192 14:13:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:02.127 14:13:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:02.127 [2024-07-15 14:13:48.122091] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:02.385 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:02.385 "name": "raid_bdev1", 00:21:02.385 "aliases": [ 00:21:02.385 "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad" 00:21:02.385 ], 00:21:02.385 "product_name": "Raid Volume", 00:21:02.385 "block_size": 512, 00:21:02.385 "num_blocks": 63488, 00:21:02.385 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:02.385 "assigned_rate_limits": { 00:21:02.385 "rw_ios_per_sec": 0, 00:21:02.385 "rw_mbytes_per_sec": 0, 00:21:02.385 "r_mbytes_per_sec": 0, 00:21:02.385 "w_mbytes_per_sec": 0 00:21:02.385 }, 00:21:02.385 "claimed": false, 00:21:02.385 "zoned": false, 00:21:02.385 "supported_io_types": { 00:21:02.385 "read": true, 00:21:02.385 "write": true, 00:21:02.385 "unmap": false, 00:21:02.385 "flush": false, 00:21:02.385 "reset": true, 00:21:02.385 "nvme_admin": false, 00:21:02.385 "nvme_io": false, 00:21:02.385 "nvme_io_md": false, 00:21:02.385 "write_zeroes": true, 00:21:02.385 "zcopy": false, 00:21:02.385 "get_zone_info": false, 00:21:02.385 "zone_management": false, 00:21:02.386 "zone_append": false, 00:21:02.386 "compare": false, 00:21:02.386 "compare_and_write": false, 00:21:02.386 "abort": false, 00:21:02.386 "seek_hole": false, 00:21:02.386 "seek_data": false, 00:21:02.386 "copy": false, 00:21:02.386 "nvme_iov_md": false 00:21:02.386 }, 00:21:02.386 "memory_domains": [ 00:21:02.386 { 00:21:02.386 "dma_device_id": "system", 00:21:02.386 "dma_device_type": 1 00:21:02.386 }, 00:21:02.386 { 00:21:02.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.386 "dma_device_type": 2 00:21:02.386 }, 00:21:02.386 { 00:21:02.386 "dma_device_id": "system", 00:21:02.386 "dma_device_type": 1 00:21:02.386 }, 00:21:02.386 { 
00:21:02.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.386 "dma_device_type": 2 00:21:02.386 }, 00:21:02.386 { 00:21:02.386 "dma_device_id": "system", 00:21:02.386 "dma_device_type": 1 00:21:02.386 }, 00:21:02.386 { 00:21:02.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.386 "dma_device_type": 2 00:21:02.386 } 00:21:02.386 ], 00:21:02.386 "driver_specific": { 00:21:02.386 "raid": { 00:21:02.386 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:02.386 "strip_size_kb": 0, 00:21:02.386 "state": "online", 00:21:02.386 "raid_level": "raid1", 00:21:02.386 "superblock": true, 00:21:02.386 "num_base_bdevs": 3, 00:21:02.386 "num_base_bdevs_discovered": 3, 00:21:02.386 "num_base_bdevs_operational": 3, 00:21:02.386 "base_bdevs_list": [ 00:21:02.386 { 00:21:02.386 "name": "pt1", 00:21:02.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.386 "is_configured": true, 00:21:02.386 "data_offset": 2048, 00:21:02.386 "data_size": 63488 00:21:02.386 }, 00:21:02.386 { 00:21:02.386 "name": "pt2", 00:21:02.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:02.386 "is_configured": true, 00:21:02.386 "data_offset": 2048, 00:21:02.386 "data_size": 63488 00:21:02.386 }, 00:21:02.386 { 00:21:02.386 "name": "pt3", 00:21:02.386 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:02.386 "is_configured": true, 00:21:02.386 "data_offset": 2048, 00:21:02.386 "data_size": 63488 00:21:02.386 } 00:21:02.386 ] 00:21:02.386 } 00:21:02.386 } 00:21:02.386 }' 00:21:02.386 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:02.386 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:02.386 pt2 00:21:02.386 pt3' 00:21:02.386 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:02.386 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:02.386 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:02.716 "name": "pt1", 00:21:02.716 "aliases": [ 00:21:02.716 "00000000-0000-0000-0000-000000000001" 00:21:02.716 ], 00:21:02.716 "product_name": "passthru", 00:21:02.716 "block_size": 512, 00:21:02.716 "num_blocks": 65536, 00:21:02.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:02.716 "assigned_rate_limits": { 00:21:02.716 "rw_ios_per_sec": 0, 00:21:02.716 "rw_mbytes_per_sec": 0, 00:21:02.716 "r_mbytes_per_sec": 0, 00:21:02.716 "w_mbytes_per_sec": 0 00:21:02.716 }, 00:21:02.716 "claimed": true, 00:21:02.716 "claim_type": "exclusive_write", 00:21:02.716 "zoned": false, 00:21:02.716 "supported_io_types": { 00:21:02.716 "read": true, 00:21:02.716 "write": true, 00:21:02.716 "unmap": true, 00:21:02.716 "flush": true, 00:21:02.716 "reset": true, 00:21:02.716 "nvme_admin": false, 00:21:02.716 "nvme_io": false, 00:21:02.716 "nvme_io_md": false, 00:21:02.716 "write_zeroes": true, 00:21:02.716 "zcopy": true, 00:21:02.716 "get_zone_info": false, 00:21:02.716 "zone_management": false, 00:21:02.716 "zone_append": false, 00:21:02.716 "compare": false, 00:21:02.716 "compare_and_write": false, 00:21:02.716 "abort": true, 00:21:02.716 "seek_hole": false, 00:21:02.716 "seek_data": false, 00:21:02.716 "copy": true, 00:21:02.716 "nvme_iov_md": false 00:21:02.716 }, 
00:21:02.716 "memory_domains": [ 00:21:02.716 { 00:21:02.716 "dma_device_id": "system", 00:21:02.716 "dma_device_type": 1 00:21:02.716 }, 00:21:02.716 { 00:21:02.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:02.716 "dma_device_type": 2 00:21:02.716 } 00:21:02.716 ], 00:21:02.716 "driver_specific": { 00:21:02.716 "passthru": { 00:21:02.716 "name": "pt1", 00:21:02.716 "base_bdev_name": "malloc1" 00:21:02.716 } 00:21:02.716 } 00:21:02.716 }' 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:02.716 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:02.974 14:13:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:03.232 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:03.232 "name": "pt2", 00:21:03.232 "aliases": [ 00:21:03.232 "00000000-0000-0000-0000-000000000002" 00:21:03.232 ], 00:21:03.232 "product_name": "passthru", 00:21:03.232 "block_size": 512, 00:21:03.232 "num_blocks": 65536, 00:21:03.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:03.232 "assigned_rate_limits": { 00:21:03.232 "rw_ios_per_sec": 0, 00:21:03.232 "rw_mbytes_per_sec": 0, 00:21:03.232 "r_mbytes_per_sec": 0, 00:21:03.232 "w_mbytes_per_sec": 0 00:21:03.232 }, 00:21:03.232 "claimed": true, 00:21:03.232 "claim_type": "exclusive_write", 00:21:03.232 "zoned": false, 00:21:03.232 "supported_io_types": { 00:21:03.232 "read": true, 00:21:03.232 "write": true, 00:21:03.232 "unmap": true, 00:21:03.232 "flush": true, 00:21:03.232 "reset": true, 00:21:03.232 "nvme_admin": false, 00:21:03.232 "nvme_io": false, 00:21:03.232 "nvme_io_md": false, 00:21:03.232 "write_zeroes": true, 00:21:03.232 "zcopy": true, 00:21:03.232 "get_zone_info": false, 00:21:03.232 "zone_management": false, 00:21:03.232 "zone_append": false, 00:21:03.232 "compare": false, 00:21:03.232 "compare_and_write": false, 00:21:03.232 "abort": true, 00:21:03.232 "seek_hole": false, 00:21:03.232 "seek_data": false, 00:21:03.232 "copy": true, 00:21:03.232 "nvme_iov_md": false 00:21:03.232 }, 00:21:03.232 "memory_domains": [ 00:21:03.232 { 00:21:03.232 "dma_device_id": "system", 00:21:03.232 "dma_device_type": 1 00:21:03.232 }, 00:21:03.232 { 
00:21:03.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:03.232 "dma_device_type": 2 00:21:03.232 } 00:21:03.232 ], 00:21:03.232 "driver_specific": { 00:21:03.232 "passthru": { 00:21:03.232 "name": "pt2", 00:21:03.232 "base_bdev_name": "malloc2" 00:21:03.232 } 00:21:03.232 } 00:21:03.232 }' 00:21:03.232 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.232 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:03.490 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.749 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:03.749 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:03.749 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:03.749 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:03.749 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:04.009 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:04.009 "name": "pt3", 00:21:04.009 "aliases": [ 00:21:04.009 "00000000-0000-0000-0000-000000000003" 00:21:04.009 ], 00:21:04.009 "product_name": "passthru", 00:21:04.009 "block_size": 512, 00:21:04.009 "num_blocks": 65536, 00:21:04.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:04.009 "assigned_rate_limits": { 00:21:04.009 "rw_ios_per_sec": 0, 00:21:04.009 "rw_mbytes_per_sec": 0, 00:21:04.009 "r_mbytes_per_sec": 0, 00:21:04.009 "w_mbytes_per_sec": 0 00:21:04.009 }, 00:21:04.009 "claimed": true, 00:21:04.009 "claim_type": "exclusive_write", 00:21:04.009 "zoned": false, 00:21:04.009 "supported_io_types": { 00:21:04.009 "read": true, 00:21:04.009 "write": true, 00:21:04.009 "unmap": true, 00:21:04.009 "flush": true, 00:21:04.009 "reset": true, 00:21:04.009 "nvme_admin": false, 00:21:04.009 "nvme_io": false, 00:21:04.009 "nvme_io_md": false, 00:21:04.009 "write_zeroes": true, 00:21:04.009 "zcopy": true, 00:21:04.009 "get_zone_info": false, 00:21:04.009 "zone_management": false, 00:21:04.009 "zone_append": false, 00:21:04.009 "compare": false, 00:21:04.009 "compare_and_write": false, 00:21:04.009 "abort": true, 00:21:04.009 "seek_hole": false, 00:21:04.009 "seek_data": false, 00:21:04.009 "copy": true, 00:21:04.009 "nvme_iov_md": false 00:21:04.009 }, 00:21:04.009 "memory_domains": [ 00:21:04.009 { 00:21:04.009 "dma_device_id": "system", 00:21:04.009 "dma_device_type": 1 00:21:04.009 }, 00:21:04.009 { 00:21:04.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.009 "dma_device_type": 2 00:21:04.009 } 00:21:04.009 ], 00:21:04.009 "driver_specific": { 
00:21:04.009 "passthru": { 00:21:04.009 "name": "pt3", 00:21:04.009 "base_bdev_name": "malloc3" 00:21:04.009 } 00:21:04.009 } 00:21:04.009 }' 00:21:04.009 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.009 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:04.009 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:04.009 14:13:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:04.268 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:21:04.527 [2024-07-15 14:13:50.526456] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:04.786 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad 00:21:04.786 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad ']' 00:21:04.786 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:04.786 [2024-07-15 14:13:50.786316] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:04.786 [2024-07-15 14:13:50.786373] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:04.786 [2024-07-15 14:13:50.786473] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:04.786 [2024-07-15 14:13:50.786531] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:04.786 [2024-07-15 14:13:50.786543] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:21:05.044 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:05.044 14:13:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:21:05.302 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:21:05.302 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:21:05.302 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.302 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:05.560 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.560 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:05.818 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:21:05.818 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:06.076 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:21:06.076 14:13:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:06.335 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:21:06.593 [2024-07-15 14:13:52.438998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:21:06.593 [2024-07-15 14:13:52.440608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:21:06.593 [2024-07-15 14:13:52.440675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:21:06.593 [2024-07-15 14:13:52.440721] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:21:06.593 [2024-07-15 14:13:52.440832] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:21:06.593 [2024-07-15 14:13:52.440867] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:21:06.593 [2024-07-15 14:13:52.440899] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:06.593 [2024-07-15 14:13:52.440912] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:21:06.593 request: 00:21:06.593 { 00:21:06.593 "name": "raid_bdev1", 00:21:06.593 "raid_level": "raid1", 00:21:06.593 "base_bdevs": [ 00:21:06.593 "malloc1", 00:21:06.593 "malloc2", 00:21:06.593 "malloc3" 00:21:06.593 ], 00:21:06.593 "superblock": false, 00:21:06.593 "method": "bdev_raid_create", 00:21:06.593 "req_id": 1 00:21:06.593 } 00:21:06.593 Got JSON-RPC error response 00:21:06.593 response: 00:21:06.593 { 00:21:06.593 "code": -17, 00:21:06.593 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:21:06.593 } 00:21:06.593 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:21:06.593 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:06.593 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:06.593 14:13:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:06.593 14:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.593 14:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:21:06.850 14:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:21:06.850 14:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:21:06.850 14:13:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:07.108 [2024-07-15 14:13:52.985198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:07.108 [2024-07-15 14:13:52.985328] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.108 [2024-07-15 14:13:52.985374] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:21:07.108 [2024-07-15 14:13:52.985401] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.108 [2024-07-15 14:13:52.987416] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.108 [2024-07-15 14:13:52.987471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:07.108 [2024-07-15 14:13:52.987581] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:07.108 [2024-07-15 14:13:52.987633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:07.108 pt1 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.108 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.366 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:07.366 "name": "raid_bdev1", 00:21:07.366 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:07.366 "strip_size_kb": 0, 00:21:07.366 "state": "configuring", 00:21:07.366 "raid_level": "raid1", 00:21:07.366 "superblock": true, 00:21:07.366 "num_base_bdevs": 3, 00:21:07.366 "num_base_bdevs_discovered": 1, 00:21:07.366 "num_base_bdevs_operational": 3, 00:21:07.366 "base_bdevs_list": [ 00:21:07.366 { 00:21:07.366 "name": "pt1", 00:21:07.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:07.366 "is_configured": true, 00:21:07.366 "data_offset": 2048, 00:21:07.366 "data_size": 63488 00:21:07.366 }, 00:21:07.366 { 00:21:07.366 "name": null, 00:21:07.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:07.366 "is_configured": false, 00:21:07.366 "data_offset": 2048, 00:21:07.366 "data_size": 63488 00:21:07.366 }, 00:21:07.366 { 00:21:07.366 "name": null, 00:21:07.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:07.366 "is_configured": false, 00:21:07.366 "data_offset": 2048, 00:21:07.366 "data_size": 63488 00:21:07.366 } 00:21:07.366 ] 00:21:07.366 }' 00:21:07.366 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:07.366 14:13:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.934 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:21:07.934 14:13:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:08.279 [2024-07-15 14:13:54.185262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:08.279 [2024-07-15 14:13:54.185395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.279 [2024-07-15 14:13:54.185442] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:08.279 [2024-07-15 14:13:54.185467] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.279 [2024-07-15 14:13:54.185886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.279 [2024-07-15 14:13:54.185937] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:08.279 [2024-07-15 
14:13:54.186030] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:08.279 [2024-07-15 14:13:54.186062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:08.279 pt2 00:21:08.279 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:08.571 [2024-07-15 14:13:54.489370] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.571 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.832 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.832 "name": "raid_bdev1", 00:21:08.832 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:08.832 "strip_size_kb": 0, 00:21:08.832 "state": "configuring", 00:21:08.832 "raid_level": "raid1", 00:21:08.832 "superblock": true, 00:21:08.832 "num_base_bdevs": 3, 00:21:08.832 "num_base_bdevs_discovered": 1, 00:21:08.832 "num_base_bdevs_operational": 3, 00:21:08.832 "base_bdevs_list": [ 00:21:08.832 { 00:21:08.832 "name": "pt1", 00:21:08.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:08.832 "is_configured": true, 00:21:08.832 "data_offset": 2048, 00:21:08.832 "data_size": 63488 00:21:08.832 }, 00:21:08.832 { 00:21:08.832 "name": null, 00:21:08.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:08.832 "is_configured": false, 00:21:08.832 "data_offset": 2048, 00:21:08.832 "data_size": 63488 00:21:08.832 }, 00:21:08.832 { 00:21:08.832 "name": null, 00:21:08.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:08.832 "is_configured": false, 00:21:08.832 "data_offset": 2048, 00:21:08.832 "data_size": 63488 00:21:08.832 } 00:21:08.832 ] 00:21:08.832 }' 00:21:08.832 14:13:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.832 14:13:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:09.400 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:21:09.400 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:09.401 14:13:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:09.660 [2024-07-15 14:13:55.621573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:09.660 [2024-07-15 14:13:55.621692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.660 [2024-07-15 14:13:55.621740] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:09.660 [2024-07-15 14:13:55.621773] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.660 [2024-07-15 14:13:55.622150] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.660 [2024-07-15 14:13:55.622200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:09.660 [2024-07-15 14:13:55.622292] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:09.660 [2024-07-15 14:13:55.622317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:09.660 pt2 00:21:09.660 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:09.660 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:09.660 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:09.919 [2024-07-15 14:13:55.909563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:09.919 [2024-07-15 14:13:55.909684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:09.919 [2024-07-15 14:13:55.909720] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:21:09.919 [2024-07-15 14:13:55.909772] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:09.919 [2024-07-15 14:13:55.910170] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:09.919 [2024-07-15 14:13:55.910230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:09.919 [2024-07-15 14:13:55.910324] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:09.919 [2024-07-15 14:13:55.910349] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:09.919 [2024-07-15 14:13:55.910441] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:21:09.919 [2024-07-15 14:13:55.910454] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:09.919 [2024-07-15 14:13:55.910528] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:09.919 [2024-07-15 14:13:55.910769] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:21:09.919 [2024-07-15 14:13:55.910794] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:21:09.919 [2024-07-15 14:13:55.910904] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:09.919 pt3 00:21:10.177 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:21:10.177 14:13:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:21:10.177 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:10.177 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.178 14:13:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.436 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:10.436 "name": "raid_bdev1", 00:21:10.436 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:10.436 "strip_size_kb": 0, 00:21:10.436 "state": "online", 00:21:10.436 "raid_level": "raid1", 00:21:10.436 "superblock": true, 00:21:10.436 "num_base_bdevs": 3, 00:21:10.436 "num_base_bdevs_discovered": 3, 00:21:10.436 "num_base_bdevs_operational": 3, 00:21:10.436 "base_bdevs_list": [ 00:21:10.436 { 00:21:10.436 "name": "pt1", 00:21:10.436 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:10.436 "is_configured": true, 00:21:10.436 "data_offset": 2048, 00:21:10.436 "data_size": 63488 00:21:10.436 }, 00:21:10.436 { 00:21:10.436 "name": "pt2", 00:21:10.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:10.436 "is_configured": true, 00:21:10.436 "data_offset": 2048, 00:21:10.436 "data_size": 63488 00:21:10.436 }, 00:21:10.436 { 00:21:10.436 "name": "pt3", 00:21:10.436 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:10.436 "is_configured": true, 00:21:10.436 "data_offset": 2048, 00:21:10.436 "data_size": 63488 00:21:10.436 } 00:21:10.436 ] 00:21:10.436 }' 00:21:10.436 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:10.436 14:13:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:11.004 14:13:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:11.263 [2024-07-15 14:13:57.122027] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:11.263 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:11.263 "name": "raid_bdev1", 00:21:11.263 "aliases": [ 00:21:11.263 "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad" 00:21:11.263 ], 00:21:11.263 "product_name": "Raid Volume", 00:21:11.263 "block_size": 512, 00:21:11.263 "num_blocks": 63488, 00:21:11.263 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:11.263 "assigned_rate_limits": { 00:21:11.263 "rw_ios_per_sec": 0, 00:21:11.263 "rw_mbytes_per_sec": 0, 00:21:11.263 "r_mbytes_per_sec": 0, 00:21:11.263 "w_mbytes_per_sec": 0 00:21:11.263 }, 00:21:11.263 "claimed": false, 00:21:11.263 "zoned": false, 00:21:11.263 "supported_io_types": { 00:21:11.263 "read": true, 00:21:11.263 "write": true, 00:21:11.263 "unmap": false, 00:21:11.263 "flush": false, 00:21:11.263 "reset": true, 00:21:11.263 "nvme_admin": false, 00:21:11.263 "nvme_io": false, 00:21:11.263 "nvme_io_md": false, 00:21:11.263 "write_zeroes": true, 00:21:11.263 "zcopy": false, 00:21:11.263 "get_zone_info": false, 00:21:11.263 "zone_management": false, 00:21:11.263 "zone_append": false, 00:21:11.263 "compare": false, 00:21:11.263 "compare_and_write": false, 00:21:11.263 "abort": false, 00:21:11.263 "seek_hole": false, 00:21:11.263 "seek_data": false, 00:21:11.263 "copy": false, 00:21:11.263 "nvme_iov_md": false 00:21:11.263 }, 00:21:11.263 "memory_domains": [ 00:21:11.263 { 00:21:11.263 "dma_device_id": "system", 00:21:11.263 "dma_device_type": 1 00:21:11.263 }, 00:21:11.263 { 00:21:11.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.263 "dma_device_type": 2 00:21:11.263 }, 00:21:11.263 { 00:21:11.263 "dma_device_id": "system", 00:21:11.263 "dma_device_type": 1 00:21:11.263 }, 00:21:11.263 { 00:21:11.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.263 "dma_device_type": 2 00:21:11.263 }, 00:21:11.263 { 00:21:11.263 "dma_device_id": "system", 00:21:11.263 "dma_device_type": 1 00:21:11.263 }, 00:21:11.263 { 00:21:11.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.263 "dma_device_type": 2 00:21:11.263 } 00:21:11.263 ], 00:21:11.263 "driver_specific": { 00:21:11.263 "raid": { 00:21:11.263 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:11.263 "strip_size_kb": 0, 00:21:11.263 "state": "online", 00:21:11.263 "raid_level": "raid1", 00:21:11.263 "superblock": true, 00:21:11.263 "num_base_bdevs": 3, 00:21:11.263 "num_base_bdevs_discovered": 3, 00:21:11.263 "num_base_bdevs_operational": 3, 00:21:11.263 "base_bdevs_list": [ 00:21:11.263 { 00:21:11.263 "name": "pt1", 00:21:11.263 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.264 "is_configured": true, 00:21:11.264 "data_offset": 2048, 00:21:11.264 "data_size": 63488 00:21:11.264 }, 00:21:11.264 { 00:21:11.264 "name": "pt2", 00:21:11.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:11.264 "is_configured": true, 00:21:11.264 "data_offset": 2048, 00:21:11.264 "data_size": 63488 00:21:11.264 }, 00:21:11.264 { 00:21:11.264 "name": "pt3", 00:21:11.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:11.264 "is_configured": true, 00:21:11.264 "data_offset": 2048, 00:21:11.264 "data_size": 63488 00:21:11.264 } 00:21:11.264 ] 00:21:11.264 } 00:21:11.264 } 00:21:11.264 }' 
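The JSON above is the raid_bdev1 descriptor once all three legs are back online; the trace that follows is verify_raid_bdev_properties pulling single fields out of it with jq and then re-querying each configured base bdev. A condensed sketch of the same checks, using only the jq filters and expected values seen in this run (the standalone loop and here-strings are an illustration, not the literal bdev_raid.sh code):

# Condensed form of the per-leg property checks: list the configured base bdevs
# from the raid descriptor, then verify each leg's geometry matches what this run reports.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

names=$($RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]' |
        jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
# -> pt1 pt2 pt3, one per line

for name in $names; do
    info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size    <<< "$info") == 512  ]]   # every leg uses 512-byte blocks
    [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata area
    [[ $(jq .md_interleave <<< "$info") == null ]]
    [[ $(jq .dif_type      <<< "$info") == null ]]
done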
00:21:11.264 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:11.264 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:21:11.264 pt2 00:21:11.264 pt3' 00:21:11.264 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:11.264 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:21:11.264 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:11.523 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:11.523 "name": "pt1", 00:21:11.523 "aliases": [ 00:21:11.523 "00000000-0000-0000-0000-000000000001" 00:21:11.523 ], 00:21:11.523 "product_name": "passthru", 00:21:11.523 "block_size": 512, 00:21:11.523 "num_blocks": 65536, 00:21:11.523 "uuid": "00000000-0000-0000-0000-000000000001", 00:21:11.523 "assigned_rate_limits": { 00:21:11.523 "rw_ios_per_sec": 0, 00:21:11.523 "rw_mbytes_per_sec": 0, 00:21:11.523 "r_mbytes_per_sec": 0, 00:21:11.523 "w_mbytes_per_sec": 0 00:21:11.523 }, 00:21:11.523 "claimed": true, 00:21:11.523 "claim_type": "exclusive_write", 00:21:11.523 "zoned": false, 00:21:11.523 "supported_io_types": { 00:21:11.523 "read": true, 00:21:11.523 "write": true, 00:21:11.523 "unmap": true, 00:21:11.523 "flush": true, 00:21:11.523 "reset": true, 00:21:11.523 "nvme_admin": false, 00:21:11.523 "nvme_io": false, 00:21:11.523 "nvme_io_md": false, 00:21:11.523 "write_zeroes": true, 00:21:11.523 "zcopy": true, 00:21:11.523 "get_zone_info": false, 00:21:11.523 "zone_management": false, 00:21:11.523 "zone_append": false, 00:21:11.523 "compare": false, 00:21:11.523 "compare_and_write": false, 00:21:11.523 "abort": true, 00:21:11.523 "seek_hole": false, 00:21:11.523 "seek_data": false, 00:21:11.523 "copy": true, 00:21:11.523 "nvme_iov_md": false 00:21:11.523 }, 00:21:11.523 "memory_domains": [ 00:21:11.523 { 00:21:11.523 "dma_device_id": "system", 00:21:11.523 "dma_device_type": 1 00:21:11.523 }, 00:21:11.523 { 00:21:11.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.523 "dma_device_type": 2 00:21:11.523 } 00:21:11.523 ], 00:21:11.523 "driver_specific": { 00:21:11.523 "passthru": { 00:21:11.523 "name": "pt1", 00:21:11.523 "base_bdev_name": "malloc1" 00:21:11.523 } 00:21:11.523 } 00:21:11.523 }' 00:21:11.523 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.523 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:11.782 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:11.782 14:13:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.041 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.041 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.041 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:21:12.041 14:13:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.300 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.300 "name": "pt2", 00:21:12.300 "aliases": [ 00:21:12.300 "00000000-0000-0000-0000-000000000002" 00:21:12.300 ], 00:21:12.300 "product_name": "passthru", 00:21:12.300 "block_size": 512, 00:21:12.300 "num_blocks": 65536, 00:21:12.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:12.300 "assigned_rate_limits": { 00:21:12.300 "rw_ios_per_sec": 0, 00:21:12.300 "rw_mbytes_per_sec": 0, 00:21:12.300 "r_mbytes_per_sec": 0, 00:21:12.300 "w_mbytes_per_sec": 0 00:21:12.300 }, 00:21:12.300 "claimed": true, 00:21:12.300 "claim_type": "exclusive_write", 00:21:12.300 "zoned": false, 00:21:12.300 "supported_io_types": { 00:21:12.300 "read": true, 00:21:12.300 "write": true, 00:21:12.300 "unmap": true, 00:21:12.300 "flush": true, 00:21:12.300 "reset": true, 00:21:12.300 "nvme_admin": false, 00:21:12.300 "nvme_io": false, 00:21:12.300 "nvme_io_md": false, 00:21:12.300 "write_zeroes": true, 00:21:12.300 "zcopy": true, 00:21:12.300 "get_zone_info": false, 00:21:12.300 "zone_management": false, 00:21:12.300 "zone_append": false, 00:21:12.300 "compare": false, 00:21:12.300 "compare_and_write": false, 00:21:12.300 "abort": true, 00:21:12.300 "seek_hole": false, 00:21:12.300 "seek_data": false, 00:21:12.300 "copy": true, 00:21:12.300 "nvme_iov_md": false 00:21:12.300 }, 00:21:12.300 "memory_domains": [ 00:21:12.300 { 00:21:12.300 "dma_device_id": "system", 00:21:12.300 "dma_device_type": 1 00:21:12.300 }, 00:21:12.300 { 00:21:12.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.300 "dma_device_type": 2 00:21:12.300 } 00:21:12.300 ], 00:21:12.300 "driver_specific": { 00:21:12.300 "passthru": { 00:21:12.300 "name": "pt2", 00:21:12.300 "base_bdev_name": "malloc2" 00:21:12.300 } 00:21:12.300 } 00:21:12.300 }' 00:21:12.300 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.300 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.300 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.300 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.300 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:21:12.559 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.126 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.126 "name": "pt3", 00:21:13.126 "aliases": [ 00:21:13.126 "00000000-0000-0000-0000-000000000003" 00:21:13.126 ], 00:21:13.126 "product_name": "passthru", 00:21:13.126 "block_size": 512, 00:21:13.126 "num_blocks": 65536, 00:21:13.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:13.126 "assigned_rate_limits": { 00:21:13.126 "rw_ios_per_sec": 0, 00:21:13.126 "rw_mbytes_per_sec": 0, 00:21:13.126 "r_mbytes_per_sec": 0, 00:21:13.126 "w_mbytes_per_sec": 0 00:21:13.126 }, 00:21:13.126 "claimed": true, 00:21:13.126 "claim_type": "exclusive_write", 00:21:13.126 "zoned": false, 00:21:13.126 "supported_io_types": { 00:21:13.126 "read": true, 00:21:13.126 "write": true, 00:21:13.126 "unmap": true, 00:21:13.126 "flush": true, 00:21:13.126 "reset": true, 00:21:13.126 "nvme_admin": false, 00:21:13.126 "nvme_io": false, 00:21:13.126 "nvme_io_md": false, 00:21:13.126 "write_zeroes": true, 00:21:13.126 "zcopy": true, 00:21:13.126 "get_zone_info": false, 00:21:13.126 "zone_management": false, 00:21:13.126 "zone_append": false, 00:21:13.126 "compare": false, 00:21:13.126 "compare_and_write": false, 00:21:13.126 "abort": true, 00:21:13.126 "seek_hole": false, 00:21:13.126 "seek_data": false, 00:21:13.126 "copy": true, 00:21:13.126 "nvme_iov_md": false 00:21:13.126 }, 00:21:13.126 "memory_domains": [ 00:21:13.126 { 00:21:13.126 "dma_device_id": "system", 00:21:13.126 "dma_device_type": 1 00:21:13.126 }, 00:21:13.126 { 00:21:13.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.126 "dma_device_type": 2 00:21:13.126 } 00:21:13.126 ], 00:21:13.126 "driver_specific": { 00:21:13.126 "passthru": { 00:21:13.126 "name": "pt3", 00:21:13.126 "base_bdev_name": "malloc3" 00:21:13.126 } 00:21:13.126 } 00:21:13.126 }' 00:21:13.126 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.126 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.126 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.126 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.126 14:13:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.126 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:13.126 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.127 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.127 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.127 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.385 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.385 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.385 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:13.385 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:21:13.642 [2024-07-15 14:13:59.461364] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:13.642 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad '!=' 740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad ']' 00:21:13.642 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:21:13.642 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:13.642 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:13.642 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:21:13.899 [2024-07-15 14:13:59.725242] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.899 14:13:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.168 14:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:14.168 "name": "raid_bdev1", 00:21:14.168 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:14.168 "strip_size_kb": 0, 00:21:14.168 "state": "online", 00:21:14.168 "raid_level": "raid1", 00:21:14.168 "superblock": true, 00:21:14.168 "num_base_bdevs": 3, 00:21:14.168 "num_base_bdevs_discovered": 2, 00:21:14.168 "num_base_bdevs_operational": 2, 00:21:14.168 "base_bdevs_list": [ 00:21:14.168 { 00:21:14.168 "name": null, 00:21:14.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.168 "is_configured": false, 00:21:14.168 "data_offset": 2048, 00:21:14.168 "data_size": 63488 00:21:14.168 }, 00:21:14.168 { 00:21:14.168 "name": "pt2", 00:21:14.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:14.168 "is_configured": true, 00:21:14.168 "data_offset": 2048, 00:21:14.168 "data_size": 63488 00:21:14.168 }, 00:21:14.168 { 00:21:14.168 "name": "pt3", 00:21:14.169 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:14.169 "is_configured": true, 00:21:14.169 "data_offset": 2048, 00:21:14.169 
"data_size": 63488 00:21:14.169 } 00:21:14.169 ] 00:21:14.169 }' 00:21:14.169 14:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:14.169 14:14:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:14.772 14:14:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:15.339 [2024-07-15 14:14:01.041382] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:15.339 [2024-07-15 14:14:01.041628] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.339 [2024-07-15 14:14:01.041817] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.339 [2024-07-15 14:14:01.041972] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:15.339 [2024-07-15 14:14:01.042096] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:21:15.339 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.339 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:21:15.339 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:21:15.339 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:21:15.339 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:21:15.339 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:15.339 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:15.905 14:14:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:21:16.162 [2024-07-15 14:14:02.045506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:21:16.162 [2024-07-15 14:14:02.045812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:16.162 [2024-07-15 14:14:02.045992] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:16.162 [2024-07-15 14:14:02.046133] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:16.162 [2024-07-15 14:14:02.047980] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:21:16.162 [2024-07-15 14:14:02.048151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:21:16.162 [2024-07-15 14:14:02.048354] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:21:16.162 [2024-07-15 14:14:02.048520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:16.162 pt2 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.162 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.426 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:16.426 "name": "raid_bdev1", 00:21:16.426 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:16.426 "strip_size_kb": 0, 00:21:16.426 "state": "configuring", 00:21:16.426 "raid_level": "raid1", 00:21:16.426 "superblock": true, 00:21:16.426 "num_base_bdevs": 3, 00:21:16.426 "num_base_bdevs_discovered": 1, 00:21:16.426 "num_base_bdevs_operational": 2, 00:21:16.426 "base_bdevs_list": [ 00:21:16.426 { 00:21:16.426 "name": null, 00:21:16.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.426 "is_configured": false, 00:21:16.426 "data_offset": 2048, 00:21:16.426 "data_size": 63488 00:21:16.426 }, 00:21:16.426 { 00:21:16.426 "name": "pt2", 00:21:16.426 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:16.426 "is_configured": true, 00:21:16.426 "data_offset": 2048, 00:21:16.426 "data_size": 63488 00:21:16.426 }, 00:21:16.426 { 00:21:16.426 "name": null, 00:21:16.426 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:16.426 "is_configured": false, 00:21:16.426 "data_offset": 2048, 00:21:16.426 "data_size": 63488 00:21:16.426 } 00:21:16.426 ] 00:21:16.426 }' 00:21:16.426 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:16.426 14:14:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:17.030 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:21:17.030 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:21:17.030 14:14:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:21:17.030 14:14:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:17.288 [2024-07-15 14:14:03.189703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:17.288 [2024-07-15 14:14:03.189992] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:17.288 [2024-07-15 14:14:03.190180] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:17.288 [2024-07-15 14:14:03.190319] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:17.288 [2024-07-15 14:14:03.190832] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:17.288 [2024-07-15 14:14:03.190993] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:17.288 [2024-07-15 14:14:03.191215] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:17.288 [2024-07-15 14:14:03.191353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:17.288 [2024-07-15 14:14:03.191549] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:21:17.288 [2024-07-15 14:14:03.191669] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:17.288 [2024-07-15 14:14:03.191812] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:21:17.288 [2024-07-15 14:14:03.192153] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:21:17.288 [2024-07-15 14:14:03.192274] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:21:17.288 [2024-07-15 14:14:03.192479] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.288 pt3 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.288 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:17.545 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:17.545 "name": "raid_bdev1", 00:21:17.545 "uuid": 
"740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:17.545 "strip_size_kb": 0, 00:21:17.545 "state": "online", 00:21:17.545 "raid_level": "raid1", 00:21:17.545 "superblock": true, 00:21:17.545 "num_base_bdevs": 3, 00:21:17.545 "num_base_bdevs_discovered": 2, 00:21:17.545 "num_base_bdevs_operational": 2, 00:21:17.545 "base_bdevs_list": [ 00:21:17.545 { 00:21:17.545 "name": null, 00:21:17.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:17.545 "is_configured": false, 00:21:17.545 "data_offset": 2048, 00:21:17.545 "data_size": 63488 00:21:17.545 }, 00:21:17.545 { 00:21:17.545 "name": "pt2", 00:21:17.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:17.545 "is_configured": true, 00:21:17.545 "data_offset": 2048, 00:21:17.545 "data_size": 63488 00:21:17.545 }, 00:21:17.545 { 00:21:17.545 "name": "pt3", 00:21:17.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:17.545 "is_configured": true, 00:21:17.545 "data_offset": 2048, 00:21:17.545 "data_size": 63488 00:21:17.545 } 00:21:17.545 ] 00:21:17.545 }' 00:21:17.545 14:14:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:17.545 14:14:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:18.475 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:18.475 [2024-07-15 14:14:04.433851] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:18.475 [2024-07-15 14:14:04.434031] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:18.475 [2024-07-15 14:14:04.434230] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:18.475 [2024-07-15 14:14:04.434382] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:18.475 [2024-07-15 14:14:04.434490] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:21:18.475 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.475 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:21:19.038 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:21:19.038 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:21:19.038 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:21:19.038 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:21:19.038 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:21:19.038 14:14:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:21:19.295 [2024-07-15 14:14:05.209947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:21:19.295 [2024-07-15 14:14:05.210224] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:19.295 [2024-07-15 14:14:05.210382] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:21:19.295 [2024-07-15 14:14:05.210514] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:19.295 [2024-07-15 14:14:05.212448] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:19.295 [2024-07-15 14:14:05.212622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:21:19.295 [2024-07-15 14:14:05.212904] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:21:19.295 [2024-07-15 14:14:05.213090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:21:19.295 [2024-07-15 14:14:05.213298] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:21:19.295 [2024-07-15 14:14:05.213424] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:19.295 [2024-07-15 14:14:05.213572] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:21:19.295 [2024-07-15 14:14:05.213792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:21:19.295 pt1 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.295 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:19.552 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:19.552 "name": "raid_bdev1", 00:21:19.552 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:19.552 "strip_size_kb": 0, 00:21:19.552 "state": "configuring", 00:21:19.552 "raid_level": "raid1", 00:21:19.552 "superblock": true, 00:21:19.552 "num_base_bdevs": 3, 00:21:19.552 "num_base_bdevs_discovered": 1, 00:21:19.552 "num_base_bdevs_operational": 2, 00:21:19.552 "base_bdevs_list": [ 00:21:19.552 { 00:21:19.552 "name": null, 00:21:19.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:19.552 "is_configured": false, 00:21:19.552 "data_offset": 2048, 00:21:19.552 "data_size": 63488 00:21:19.552 }, 00:21:19.552 { 00:21:19.552 "name": "pt2", 00:21:19.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:19.552 "is_configured": true, 00:21:19.552 "data_offset": 2048, 
00:21:19.552 "data_size": 63488 00:21:19.552 }, 00:21:19.552 { 00:21:19.552 "name": null, 00:21:19.552 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:19.552 "is_configured": false, 00:21:19.552 "data_offset": 2048, 00:21:19.552 "data_size": 63488 00:21:19.552 } 00:21:19.552 ] 00:21:19.552 }' 00:21:19.552 14:14:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:19.552 14:14:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:20.485 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:20.485 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:21:20.485 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:21:20.485 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:21:20.744 [2024-07-15 14:14:06.694298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:21:20.744 [2024-07-15 14:14:06.694576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:20.744 [2024-07-15 14:14:06.694663] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:20.744 [2024-07-15 14:14:06.694879] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:20.744 [2024-07-15 14:14:06.695291] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:20.744 [2024-07-15 14:14:06.695456] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:21:20.744 [2024-07-15 14:14:06.695666] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:21:20.744 [2024-07-15 14:14:06.695820] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:21:20.744 [2024-07-15 14:14:06.696031] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:21:20.744 [2024-07-15 14:14:06.696156] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:20.744 [2024-07-15 14:14:06.696298] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:21:20.744 [2024-07-15 14:14:06.696623] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:21:20.744 [2024-07-15 14:14:06.696758] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:21:20.744 [2024-07-15 14:14:06.696980] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:20.744 pt3 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.744 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.002 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.002 "name": "raid_bdev1", 00:21:21.002 "uuid": "740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad", 00:21:21.002 "strip_size_kb": 0, 00:21:21.002 "state": "online", 00:21:21.002 "raid_level": "raid1", 00:21:21.002 "superblock": true, 00:21:21.002 "num_base_bdevs": 3, 00:21:21.002 "num_base_bdevs_discovered": 2, 00:21:21.002 "num_base_bdevs_operational": 2, 00:21:21.002 "base_bdevs_list": [ 00:21:21.002 { 00:21:21.002 "name": null, 00:21:21.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.002 "is_configured": false, 00:21:21.002 "data_offset": 2048, 00:21:21.002 "data_size": 63488 00:21:21.002 }, 00:21:21.002 { 00:21:21.002 "name": "pt2", 00:21:21.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:21:21.002 "is_configured": true, 00:21:21.002 "data_offset": 2048, 00:21:21.002 "data_size": 63488 00:21:21.002 }, 00:21:21.002 { 00:21:21.002 "name": "pt3", 00:21:21.002 "uuid": "00000000-0000-0000-0000-000000000003", 00:21:21.002 "is_configured": true, 00:21:21.002 "data_offset": 2048, 00:21:21.002 "data_size": 63488 00:21:21.002 } 00:21:21.002 ] 00:21:21.002 }' 00:21:21.002 14:14:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.002 14:14:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.937 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:21:21.937 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:21:21.937 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:21:21.937 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:21.937 14:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:21:22.196 [2024-07-15 14:14:08.173512] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:22.196 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad '!=' 740e710d-6bd6-4fb7-a76a-cdc8a4bfd1ad ']' 00:21:22.196 14:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 199163 00:21:22.196 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 199163 ']' 00:21:22.196 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 199163 00:21:22.196 14:14:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@953 -- # uname 00:21:22.455 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:22.455 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 199163 00:21:22.455 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:22.455 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:22.455 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 199163' 00:21:22.455 killing process with pid 199163 00:21:22.455 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 199163 00:21:22.455 [2024-07-15 14:14:08.224149] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:22.455 14:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 199163 00:21:22.455 [2024-07-15 14:14:08.224347] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:22.455 [2024-07-15 14:14:08.224404] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:22.455 [2024-07-15 14:14:08.224416] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:21:22.714 [2024-07-15 14:14:08.479975] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:23.713 14:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:21:23.713 00:21:23.713 real 0m25.680s 00:21:23.713 user 0m47.211s 00:21:23.713 sys 0m2.992s 00:21:23.713 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:23.713 14:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.713 ************************************ 00:21:23.713 END TEST raid_superblock_test 00:21:23.713 ************************************ 00:21:23.713 14:14:09 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:23.713 14:14:09 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:21:23.713 14:14:09 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:23.713 14:14:09 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:23.713 14:14:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:23.713 ************************************ 00:21:23.713 START TEST raid_read_error_test 00:21:23.713 ************************************ 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.TyvNjzrSUP 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=199933 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 199933 /var/tmp/spdk-raid.sock 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 199933 ']' 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:23.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.713 14:14:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:23.972 [2024-07-15 14:14:09.721376] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:21:23.972 [2024-07-15 14:14:09.722286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid199933 ] 00:21:23.972 [2024-07-15 14:14:09.879406] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.230 [2024-07-15 14:14:10.111809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.489 [2024-07-15 14:14:10.310707] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:24.749 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.749 14:14:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:25.008 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:25.008 14:14:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:25.266 BaseBdev1_malloc 00:21:25.266 14:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:25.524 true 00:21:25.524 14:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:25.783 [2024-07-15 14:14:11.546352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:25.783 [2024-07-15 14:14:11.546984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.783 [2024-07-15 14:14:11.547224] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:25.783 [2024-07-15 14:14:11.547443] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.783 [2024-07-15 14:14:11.549578] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.783 [2024-07-15 14:14:11.549869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:25.783 BaseBdev1 00:21:25.783 14:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:25.783 14:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:26.041 BaseBdev2_malloc 00:21:26.041 14:14:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:26.300 true 00:21:26.300 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:26.558 [2024-07-15 14:14:12.344880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:26.558 [2024-07-15 14:14:12.345356] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:26.558 [2024-07-15 14:14:12.345592] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:26.558 [2024-07-15 14:14:12.345816] 
vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:26.558 [2024-07-15 14:14:12.347842] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:26.558 [2024-07-15 14:14:12.348081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:26.558 BaseBdev2 00:21:26.558 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:26.558 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:26.816 BaseBdev3_malloc 00:21:26.816 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:27.074 true 00:21:27.074 14:14:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:27.333 [2024-07-15 14:14:13.160163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:27.333 [2024-07-15 14:14:13.160911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:27.333 [2024-07-15 14:14:13.161174] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:27.333 [2024-07-15 14:14:13.161398] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:27.333 [2024-07-15 14:14:13.163463] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:27.333 [2024-07-15 14:14:13.163706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:27.333 BaseBdev3 00:21:27.333 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:27.591 [2024-07-15 14:14:13.400377] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:27.591 [2024-07-15 14:14:13.402264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:27.591 [2024-07-15 14:14:13.402483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:27.591 [2024-07-15 14:14:13.402797] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:21:27.591 [2024-07-15 14:14:13.402937] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:27.591 [2024-07-15 14:14:13.403123] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:27.591 [2024-07-15 14:14:13.403508] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:21:27.591 [2024-07-15 14:14:13.403633] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:21:27.591 [2024-07-15 14:14:13.403880] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.591 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.849 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:27.849 "name": "raid_bdev1", 00:21:27.849 "uuid": "9ce90803-665e-4b52-919e-0358620e05d4", 00:21:27.849 "strip_size_kb": 0, 00:21:27.849 "state": "online", 00:21:27.849 "raid_level": "raid1", 00:21:27.849 "superblock": true, 00:21:27.849 "num_base_bdevs": 3, 00:21:27.849 "num_base_bdevs_discovered": 3, 00:21:27.849 "num_base_bdevs_operational": 3, 00:21:27.849 "base_bdevs_list": [ 00:21:27.849 { 00:21:27.849 "name": "BaseBdev1", 00:21:27.849 "uuid": "b1458b26-dbc6-5e8c-97b9-5adb86c0bc23", 00:21:27.849 "is_configured": true, 00:21:27.849 "data_offset": 2048, 00:21:27.849 "data_size": 63488 00:21:27.849 }, 00:21:27.849 { 00:21:27.849 "name": "BaseBdev2", 00:21:27.849 "uuid": "dc4ea333-0534-5884-8561-c91e2895ba7c", 00:21:27.849 "is_configured": true, 00:21:27.849 "data_offset": 2048, 00:21:27.849 "data_size": 63488 00:21:27.849 }, 00:21:27.849 { 00:21:27.849 "name": "BaseBdev3", 00:21:27.849 "uuid": "067bd33a-d943-5165-be17-6b0c9708584b", 00:21:27.849 "is_configured": true, 00:21:27.849 "data_offset": 2048, 00:21:27.849 "data_size": 63488 00:21:27.849 } 00:21:27.849 ] 00:21:27.849 }' 00:21:27.849 14:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:27.849 14:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:28.416 14:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:28.416 14:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:28.674 [2024-07-15 14:14:14.433675] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:29.631 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # 
expected_num_base_bdevs=3 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.890 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.149 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:30.149 "name": "raid_bdev1", 00:21:30.149 "uuid": "9ce90803-665e-4b52-919e-0358620e05d4", 00:21:30.149 "strip_size_kb": 0, 00:21:30.149 "state": "online", 00:21:30.149 "raid_level": "raid1", 00:21:30.149 "superblock": true, 00:21:30.149 "num_base_bdevs": 3, 00:21:30.149 "num_base_bdevs_discovered": 3, 00:21:30.149 "num_base_bdevs_operational": 3, 00:21:30.149 "base_bdevs_list": [ 00:21:30.149 { 00:21:30.149 "name": "BaseBdev1", 00:21:30.149 "uuid": "b1458b26-dbc6-5e8c-97b9-5adb86c0bc23", 00:21:30.149 "is_configured": true, 00:21:30.149 "data_offset": 2048, 00:21:30.149 "data_size": 63488 00:21:30.149 }, 00:21:30.149 { 00:21:30.149 "name": "BaseBdev2", 00:21:30.149 "uuid": "dc4ea333-0534-5884-8561-c91e2895ba7c", 00:21:30.149 "is_configured": true, 00:21:30.149 "data_offset": 2048, 00:21:30.149 "data_size": 63488 00:21:30.149 }, 00:21:30.149 { 00:21:30.149 "name": "BaseBdev3", 00:21:30.149 "uuid": "067bd33a-d943-5165-be17-6b0c9708584b", 00:21:30.149 "is_configured": true, 00:21:30.149 "data_offset": 2048, 00:21:30.149 "data_size": 63488 00:21:30.149 } 00:21:30.149 ] 00:21:30.149 }' 00:21:30.149 14:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:30.149 14:14:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.716 14:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:30.974 [2024-07-15 14:14:16.874337] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:30.974 [2024-07-15 14:14:16.874573] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:30.974 [2024-07-15 14:14:16.876029] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:30.974 [2024-07-15 14:14:16.876190] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.974 [2024-07-15 14:14:16.876366] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:30.974 [2024-07-15 14:14:16.876477] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:21:30.974 0 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 199933 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 199933 ']' 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 199933 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 199933 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 199933' 00:21:30.975 killing process with pid 199933 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 199933 00:21:30.975 [2024-07-15 14:14:16.917595] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.975 14:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 199933 00:21:31.234 [2024-07-15 14:14:17.115498] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.TyvNjzrSUP 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:32.609 00:21:32.609 real 0m8.649s 00:21:32.609 user 0m13.311s 00:21:32.609 sys 0m0.973s 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:32.609 14:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.609 ************************************ 00:21:32.609 END TEST raid_read_error_test 00:21:32.609 ************************************ 00:21:32.609 14:14:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:32.609 14:14:18 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:21:32.609 14:14:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:32.609 14:14:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:32.609 14:14:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:32.609 ************************************ 00:21:32.609 START TEST raid_write_error_test 
00:21:32.609 ************************************ 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.tOXnfcFj6A 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=200138 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 200138 /var/tmp/spdk-raid.sock 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 200138 ']' 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:32.609 14:14:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:32.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:32.609 14:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.609 [2024-07-15 14:14:18.424108] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:21:32.609 [2024-07-15 14:14:18.424409] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200138 ] 00:21:32.609 [2024-07-15 14:14:18.578024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.868 [2024-07-15 14:14:18.790800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.126 [2024-07-15 14:14:18.989202] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:33.692 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.692 14:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:21:33.693 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:33.693 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:33.693 BaseBdev1_malloc 00:21:33.693 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:21:33.951 true 00:21:33.951 14:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:21:34.209 [2024-07-15 14:14:20.176820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:21:34.209 [2024-07-15 14:14:20.177459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.209 [2024-07-15 14:14:20.177762] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:21:34.209 [2024-07-15 14:14:20.178027] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.209 [2024-07-15 14:14:20.179999] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.209 [2024-07-15 14:14:20.180287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:34.209 BaseBdev1 00:21:34.209 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:34.209 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:34.467 BaseBdev2_malloc 00:21:34.467 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:21:34.725 true 00:21:34.725 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:21:34.982 [2024-07-15 14:14:20.924382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:21:34.982 [2024-07-15 14:14:20.924739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.982 [2024-07-15 14:14:20.924960] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:21:34.982 [2024-07-15 14:14:20.925176] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.982 [2024-07-15 14:14:20.927138] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:34.982 [2024-07-15 14:14:20.927327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:34.982 BaseBdev2 00:21:34.982 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:21:34.982 14:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:35.549 BaseBdev3_malloc 00:21:35.549 14:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:21:35.549 true 00:21:35.808 14:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:21:36.067 [2024-07-15 14:14:21.834884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:21:36.067 [2024-07-15 14:14:21.835199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:36.067 [2024-07-15 14:14:21.835393] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:21:36.067 [2024-07-15 14:14:21.835561] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:36.067 [2024-07-15 14:14:21.837489] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:36.067 [2024-07-15 14:14:21.837690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:36.067 BaseBdev3 00:21:36.067 14:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:21:36.326 [2024-07-15 14:14:22.074967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:36.326 [2024-07-15 14:14:22.076811] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:36.326 [2024-07-15 14:14:22.077029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:36.326 [2024-07-15 14:14:22.077401] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:21:36.326 [2024-07-15 14:14:22.077539] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:36.326 [2024-07-15 14:14:22.077845] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:36.326 [2024-07-15 14:14:22.078297] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:21:36.326 [2024-07-15 14:14:22.078428] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:21:36.326 [2024-07-15 14:14:22.078719] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:36.326 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:36.586 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:36.586 "name": "raid_bdev1", 00:21:36.586 "uuid": "f6e1f4d8-2c7b-4229-929c-bd1b4f4a6204", 00:21:36.586 "strip_size_kb": 0, 00:21:36.586 "state": "online", 00:21:36.586 "raid_level": "raid1", 00:21:36.586 "superblock": true, 00:21:36.586 "num_base_bdevs": 3, 00:21:36.586 "num_base_bdevs_discovered": 3, 00:21:36.586 "num_base_bdevs_operational": 3, 00:21:36.586 "base_bdevs_list": [ 00:21:36.586 { 00:21:36.586 "name": "BaseBdev1", 00:21:36.586 "uuid": "a809ff25-166c-55c1-8421-6ef6391eca54", 00:21:36.586 "is_configured": true, 00:21:36.586 "data_offset": 2048, 00:21:36.586 "data_size": 63488 00:21:36.586 }, 00:21:36.586 { 00:21:36.586 "name": "BaseBdev2", 00:21:36.586 "uuid": "417a96fb-924d-5ddd-ac52-d3a5501cd0f8", 00:21:36.586 "is_configured": true, 00:21:36.586 "data_offset": 2048, 00:21:36.586 "data_size": 63488 00:21:36.586 }, 00:21:36.586 { 00:21:36.586 "name": "BaseBdev3", 00:21:36.586 "uuid": "1305037f-d097-5602-83ac-15d11abbf8b9", 00:21:36.586 "is_configured": true, 00:21:36.586 "data_offset": 2048, 00:21:36.586 "data_size": 63488 00:21:36.586 } 00:21:36.586 ] 00:21:36.586 }' 00:21:36.586 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:36.586 14:14:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:37.153 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:21:37.153 14:14:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:21:37.153 [2024-07-15 14:14:23.092313] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:21:38.088 14:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:21:38.348 [2024-07-15 14:14:24.226856] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:21:38.348 [2024-07-15 14:14:24.226982] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:38.348 [2024-07-15 14:14:24.227200] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:38.348 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:38.607 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:38.607 "name": "raid_bdev1", 00:21:38.607 "uuid": "f6e1f4d8-2c7b-4229-929c-bd1b4f4a6204", 00:21:38.607 "strip_size_kb": 0, 00:21:38.607 "state": "online", 00:21:38.607 "raid_level": "raid1", 00:21:38.608 "superblock": true, 00:21:38.608 "num_base_bdevs": 3, 00:21:38.608 "num_base_bdevs_discovered": 2, 00:21:38.608 "num_base_bdevs_operational": 2, 00:21:38.608 "base_bdevs_list": [ 00:21:38.608 { 00:21:38.608 "name": null, 00:21:38.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:38.608 "is_configured": false, 00:21:38.608 "data_offset": 2048, 00:21:38.608 "data_size": 63488 00:21:38.608 }, 00:21:38.608 { 00:21:38.608 "name": "BaseBdev2", 00:21:38.608 "uuid": "417a96fb-924d-5ddd-ac52-d3a5501cd0f8", 00:21:38.608 "is_configured": true, 00:21:38.608 "data_offset": 2048, 00:21:38.608 
"data_size": 63488 00:21:38.608 }, 00:21:38.608 { 00:21:38.608 "name": "BaseBdev3", 00:21:38.608 "uuid": "1305037f-d097-5602-83ac-15d11abbf8b9", 00:21:38.608 "is_configured": true, 00:21:38.608 "data_offset": 2048, 00:21:38.608 "data_size": 63488 00:21:38.608 } 00:21:38.608 ] 00:21:38.608 }' 00:21:38.608 14:14:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:38.608 14:14:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:39.543 [2024-07-15 14:14:25.465981] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:39.543 [2024-07-15 14:14:25.466030] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:39.543 [2024-07-15 14:14:25.467421] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:39.543 [2024-07-15 14:14:25.467461] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.543 [2024-07-15 14:14:25.467516] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:39.543 [2024-07-15 14:14:25.467526] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:21:39.543 0 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 200138 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 200138 ']' 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 200138 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 200138 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 200138' 00:21:39.543 killing process with pid 200138 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 200138 00:21:39.543 [2024-07-15 14:14:25.514336] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:39.543 14:14:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 200138 00:21:39.801 [2024-07-15 14:14:25.713942] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.180 14:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.tOXnfcFj6A 00:21:41.180 14:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:41.180 14:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:41.181 14:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:41.181 14:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:41.181 14:14:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@213 -- # case $1 in 00:21:41.181 14:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:41.181 14:14:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:41.181 00:21:41.181 real 0m8.538s 00:21:41.181 user 0m13.110s 00:21:41.181 sys 0m0.939s 00:21:41.181 14:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:41.181 14:14:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.181 ************************************ 00:21:41.181 END TEST raid_write_error_test 00:21:41.181 ************************************ 00:21:41.181 14:14:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:21:41.181 14:14:26 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:21:41.181 14:14:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:41.181 14:14:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:41.181 14:14:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:21:41.181 14:14:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:41.181 14:14:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:41.181 ************************************ 00:21:41.181 START TEST raid_state_function_test 00:21:41.181 ************************************ 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=200348 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:41.181 Process raid pid: 200348 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 200348' 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 200348 /var/tmp/spdk-raid.sock 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 200348 ']' 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.181 14:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:41.181 [2024-07-15 14:14:27.000151] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:21:41.181 [2024-07-15 14:14:27.000332] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.181 [2024-07-15 14:14:27.150979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.439 [2024-07-15 14:14:27.376424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.696 [2024-07-15 14:14:27.581270] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.263 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.263 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:21:42.263 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:42.521 [2024-07-15 14:14:28.328260] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:42.521 [2024-07-15 14:14:28.328372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:42.521 [2024-07-15 14:14:28.328389] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:42.521 [2024-07-15 14:14:28.328416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:42.521 [2024-07-15 14:14:28.328427] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:42.521 [2024-07-15 14:14:28.328447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:42.521 [2024-07-15 14:14:28.328455] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:42.521 [2024-07-15 14:14:28.328480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.521 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.780 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.780 "name": "Existed_Raid", 00:21:42.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.780 "strip_size_kb": 64, 00:21:42.780 "state": "configuring", 00:21:42.780 "raid_level": "raid0", 00:21:42.780 "superblock": false, 00:21:42.780 "num_base_bdevs": 4, 00:21:42.780 "num_base_bdevs_discovered": 0, 00:21:42.780 "num_base_bdevs_operational": 4, 00:21:42.780 "base_bdevs_list": [ 00:21:42.780 { 00:21:42.780 "name": "BaseBdev1", 00:21:42.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.780 "is_configured": false, 00:21:42.780 "data_offset": 0, 00:21:42.780 "data_size": 0 00:21:42.780 }, 00:21:42.780 { 00:21:42.780 "name": "BaseBdev2", 00:21:42.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.780 "is_configured": false, 00:21:42.780 "data_offset": 0, 00:21:42.780 "data_size": 0 00:21:42.780 }, 00:21:42.780 { 00:21:42.780 "name": "BaseBdev3", 00:21:42.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.780 "is_configured": false, 00:21:42.780 "data_offset": 0, 00:21:42.780 "data_size": 0 00:21:42.780 }, 00:21:42.780 { 00:21:42.780 "name": "BaseBdev4", 00:21:42.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.780 "is_configured": false, 00:21:42.780 "data_offset": 0, 00:21:42.780 "data_size": 0 00:21:42.780 } 00:21:42.780 ] 00:21:42.780 }' 00:21:42.780 14:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.780 14:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:43.347 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:43.606 [2024-07-15 14:14:29.456343] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:43.606 [2024-07-15 14:14:29.456670] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:43.606 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:43.864 [2024-07-15 14:14:29.748414] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:43.864 [2024-07-15 14:14:29.748708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:43.864 [2024-07-15 14:14:29.748848] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:43.864 [2024-07-15 14:14:29.748999] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:43.864 [2024-07-15 14:14:29.749107] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:43.864 [2024-07-15 14:14:29.749192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:43.864 [2024-07-15 14:14:29.749344] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:43.864 [2024-07-15 14:14:29.749487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:43.864 14:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:44.123 [2024-07-15 14:14:30.059304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:44.123 BaseBdev1 00:21:44.123 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:44.123 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:44.123 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:44.123 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:44.123 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:44.123 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:44.123 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:44.381 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:44.641 [ 00:21:44.641 { 00:21:44.641 "name": "BaseBdev1", 00:21:44.641 "aliases": [ 00:21:44.641 "3561d0ad-32eb-4c20-a550-8fdac5a82620" 00:21:44.641 ], 00:21:44.641 "product_name": "Malloc disk", 00:21:44.641 "block_size": 512, 00:21:44.641 "num_blocks": 65536, 00:21:44.641 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:44.641 "assigned_rate_limits": { 00:21:44.641 "rw_ios_per_sec": 0, 00:21:44.641 "rw_mbytes_per_sec": 0, 00:21:44.641 "r_mbytes_per_sec": 0, 00:21:44.641 "w_mbytes_per_sec": 0 00:21:44.641 }, 00:21:44.641 "claimed": true, 00:21:44.641 "claim_type": "exclusive_write", 00:21:44.641 "zoned": false, 00:21:44.641 "supported_io_types": { 00:21:44.641 "read": true, 00:21:44.641 "write": true, 00:21:44.641 "unmap": true, 00:21:44.641 "flush": true, 00:21:44.641 "reset": true, 00:21:44.641 "nvme_admin": false, 00:21:44.641 "nvme_io": false, 00:21:44.641 "nvme_io_md": false, 00:21:44.641 "write_zeroes": true, 00:21:44.641 "zcopy": true, 00:21:44.641 "get_zone_info": false, 00:21:44.641 "zone_management": false, 00:21:44.641 "zone_append": false, 00:21:44.641 "compare": false, 00:21:44.641 "compare_and_write": false, 00:21:44.641 "abort": true, 00:21:44.641 "seek_hole": false, 00:21:44.641 "seek_data": false, 00:21:44.641 "copy": true, 00:21:44.641 "nvme_iov_md": false 00:21:44.641 }, 00:21:44.641 "memory_domains": [ 00:21:44.641 { 00:21:44.641 "dma_device_id": "system", 00:21:44.641 "dma_device_type": 1 00:21:44.641 }, 00:21:44.641 { 00:21:44.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.641 "dma_device_type": 2 00:21:44.641 } 00:21:44.641 ], 00:21:44.641 "driver_specific": {} 00:21:44.641 } 00:21:44.641 ] 00:21:44.641 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:44.641 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:44.641 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.641 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.641 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid0 00:21:44.641 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.902 "name": "Existed_Raid", 00:21:44.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.902 "strip_size_kb": 64, 00:21:44.902 "state": "configuring", 00:21:44.902 "raid_level": "raid0", 00:21:44.902 "superblock": false, 00:21:44.902 "num_base_bdevs": 4, 00:21:44.902 "num_base_bdevs_discovered": 1, 00:21:44.902 "num_base_bdevs_operational": 4, 00:21:44.902 "base_bdevs_list": [ 00:21:44.902 { 00:21:44.902 "name": "BaseBdev1", 00:21:44.902 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:44.902 "is_configured": true, 00:21:44.902 "data_offset": 0, 00:21:44.902 "data_size": 65536 00:21:44.902 }, 00:21:44.902 { 00:21:44.902 "name": "BaseBdev2", 00:21:44.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.902 "is_configured": false, 00:21:44.902 "data_offset": 0, 00:21:44.902 "data_size": 0 00:21:44.902 }, 00:21:44.902 { 00:21:44.902 "name": "BaseBdev3", 00:21:44.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.902 "is_configured": false, 00:21:44.902 "data_offset": 0, 00:21:44.902 "data_size": 0 00:21:44.902 }, 00:21:44.902 { 00:21:44.902 "name": "BaseBdev4", 00:21:44.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.902 "is_configured": false, 00:21:44.902 "data_offset": 0, 00:21:44.902 "data_size": 0 00:21:44.902 } 00:21:44.902 ] 00:21:44.902 }' 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.902 14:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:45.836 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:45.836 [2024-07-15 14:14:31.731575] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:45.836 [2024-07-15 14:14:31.731901] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:21:45.836 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:46.094 [2024-07-15 14:14:31.971642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:46.094 [2024-07-15 14:14:31.973421] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev2 00:21:46.094 [2024-07-15 14:14:31.973604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:46.094 [2024-07-15 14:14:31.973720] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:46.094 [2024-07-15 14:14:31.973872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:46.094 [2024-07-15 14:14:31.973990] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:46.094 [2024-07-15 14:14:31.974058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:46.094 14:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.094 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.352 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:46.352 "name": "Existed_Raid", 00:21:46.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.352 "strip_size_kb": 64, 00:21:46.352 "state": "configuring", 00:21:46.352 "raid_level": "raid0", 00:21:46.352 "superblock": false, 00:21:46.352 "num_base_bdevs": 4, 00:21:46.352 "num_base_bdevs_discovered": 1, 00:21:46.352 "num_base_bdevs_operational": 4, 00:21:46.352 "base_bdevs_list": [ 00:21:46.352 { 00:21:46.352 "name": "BaseBdev1", 00:21:46.352 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:46.352 "is_configured": true, 00:21:46.352 "data_offset": 0, 00:21:46.352 "data_size": 65536 00:21:46.352 }, 00:21:46.352 { 00:21:46.352 "name": "BaseBdev2", 00:21:46.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.352 "is_configured": false, 00:21:46.352 "data_offset": 0, 00:21:46.352 "data_size": 0 00:21:46.352 }, 00:21:46.352 { 00:21:46.352 "name": "BaseBdev3", 00:21:46.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.352 "is_configured": false, 00:21:46.352 "data_offset": 0, 00:21:46.352 "data_size": 0 00:21:46.352 }, 
00:21:46.352 { 00:21:46.352 "name": "BaseBdev4", 00:21:46.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.352 "is_configured": false, 00:21:46.352 "data_offset": 0, 00:21:46.352 "data_size": 0 00:21:46.352 } 00:21:46.352 ] 00:21:46.352 }' 00:21:46.352 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:46.352 14:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:46.917 14:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:47.482 [2024-07-15 14:14:33.187123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:47.482 BaseBdev2 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.482 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:47.740 [ 00:21:47.740 { 00:21:47.740 "name": "BaseBdev2", 00:21:47.740 "aliases": [ 00:21:47.740 "a8888d8c-5833-43cf-82c9-fe1ae602d42f" 00:21:47.740 ], 00:21:47.740 "product_name": "Malloc disk", 00:21:47.740 "block_size": 512, 00:21:47.740 "num_blocks": 65536, 00:21:47.740 "uuid": "a8888d8c-5833-43cf-82c9-fe1ae602d42f", 00:21:47.740 "assigned_rate_limits": { 00:21:47.740 "rw_ios_per_sec": 0, 00:21:47.740 "rw_mbytes_per_sec": 0, 00:21:47.740 "r_mbytes_per_sec": 0, 00:21:47.740 "w_mbytes_per_sec": 0 00:21:47.740 }, 00:21:47.740 "claimed": true, 00:21:47.740 "claim_type": "exclusive_write", 00:21:47.740 "zoned": false, 00:21:47.740 "supported_io_types": { 00:21:47.740 "read": true, 00:21:47.740 "write": true, 00:21:47.740 "unmap": true, 00:21:47.740 "flush": true, 00:21:47.740 "reset": true, 00:21:47.740 "nvme_admin": false, 00:21:47.740 "nvme_io": false, 00:21:47.740 "nvme_io_md": false, 00:21:47.740 "write_zeroes": true, 00:21:47.740 "zcopy": true, 00:21:47.740 "get_zone_info": false, 00:21:47.740 "zone_management": false, 00:21:47.740 "zone_append": false, 00:21:47.740 "compare": false, 00:21:47.740 "compare_and_write": false, 00:21:47.740 "abort": true, 00:21:47.740 "seek_hole": false, 00:21:47.740 "seek_data": false, 00:21:47.740 "copy": true, 00:21:47.740 "nvme_iov_md": false 00:21:47.740 }, 00:21:47.740 "memory_domains": [ 00:21:47.740 { 00:21:47.740 "dma_device_id": "system", 00:21:47.740 "dma_device_type": 1 00:21:47.740 }, 00:21:47.740 { 00:21:47.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.741 "dma_device_type": 2 00:21:47.741 } 00:21:47.741 ], 00:21:47.741 "driver_specific": {} 00:21:47.741 } 00:21:47.741 ] 00:21:47.741 14:14:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.741 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.000 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:48.000 "name": "Existed_Raid", 00:21:48.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.000 "strip_size_kb": 64, 00:21:48.000 "state": "configuring", 00:21:48.000 "raid_level": "raid0", 00:21:48.000 "superblock": false, 00:21:48.000 "num_base_bdevs": 4, 00:21:48.000 "num_base_bdevs_discovered": 2, 00:21:48.000 "num_base_bdevs_operational": 4, 00:21:48.000 "base_bdevs_list": [ 00:21:48.000 { 00:21:48.000 "name": "BaseBdev1", 00:21:48.000 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:48.000 "is_configured": true, 00:21:48.000 "data_offset": 0, 00:21:48.000 "data_size": 65536 00:21:48.000 }, 00:21:48.000 { 00:21:48.000 "name": "BaseBdev2", 00:21:48.000 "uuid": "a8888d8c-5833-43cf-82c9-fe1ae602d42f", 00:21:48.000 "is_configured": true, 00:21:48.000 "data_offset": 0, 00:21:48.000 "data_size": 65536 00:21:48.000 }, 00:21:48.000 { 00:21:48.000 "name": "BaseBdev3", 00:21:48.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.000 "is_configured": false, 00:21:48.000 "data_offset": 0, 00:21:48.000 "data_size": 0 00:21:48.000 }, 00:21:48.000 { 00:21:48.000 "name": "BaseBdev4", 00:21:48.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.000 "is_configured": false, 00:21:48.000 "data_offset": 0, 00:21:48.000 "data_size": 0 00:21:48.000 } 00:21:48.000 ] 00:21:48.000 }' 00:21:48.000 14:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:48.000 14:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:48.574 14:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev3 00:21:48.853 [2024-07-15 14:14:34.825064] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:48.853 BaseBdev3 00:21:48.853 14:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:48.853 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:48.853 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:48.853 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:48.853 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:48.853 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:48.853 14:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:49.123 14:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:49.382 [ 00:21:49.382 { 00:21:49.382 "name": "BaseBdev3", 00:21:49.382 "aliases": [ 00:21:49.382 "ccf52555-0a14-4d96-9801-076d2b98113d" 00:21:49.382 ], 00:21:49.382 "product_name": "Malloc disk", 00:21:49.382 "block_size": 512, 00:21:49.382 "num_blocks": 65536, 00:21:49.382 "uuid": "ccf52555-0a14-4d96-9801-076d2b98113d", 00:21:49.382 "assigned_rate_limits": { 00:21:49.382 "rw_ios_per_sec": 0, 00:21:49.382 "rw_mbytes_per_sec": 0, 00:21:49.382 "r_mbytes_per_sec": 0, 00:21:49.382 "w_mbytes_per_sec": 0 00:21:49.382 }, 00:21:49.382 "claimed": true, 00:21:49.382 "claim_type": "exclusive_write", 00:21:49.382 "zoned": false, 00:21:49.382 "supported_io_types": { 00:21:49.382 "read": true, 00:21:49.382 "write": true, 00:21:49.382 "unmap": true, 00:21:49.382 "flush": true, 00:21:49.382 "reset": true, 00:21:49.382 "nvme_admin": false, 00:21:49.382 "nvme_io": false, 00:21:49.382 "nvme_io_md": false, 00:21:49.382 "write_zeroes": true, 00:21:49.382 "zcopy": true, 00:21:49.382 "get_zone_info": false, 00:21:49.382 "zone_management": false, 00:21:49.382 "zone_append": false, 00:21:49.382 "compare": false, 00:21:49.382 "compare_and_write": false, 00:21:49.382 "abort": true, 00:21:49.382 "seek_hole": false, 00:21:49.382 "seek_data": false, 00:21:49.382 "copy": true, 00:21:49.382 "nvme_iov_md": false 00:21:49.382 }, 00:21:49.382 "memory_domains": [ 00:21:49.382 { 00:21:49.382 "dma_device_id": "system", 00:21:49.382 "dma_device_type": 1 00:21:49.382 }, 00:21:49.382 { 00:21:49.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.382 "dma_device_type": 2 00:21:49.382 } 00:21:49.382 ], 00:21:49.382 "driver_specific": {} 00:21:49.382 } 00:21:49.382 ] 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.382 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.642 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.642 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:49.642 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:49.642 "name": "Existed_Raid", 00:21:49.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.642 "strip_size_kb": 64, 00:21:49.642 "state": "configuring", 00:21:49.642 "raid_level": "raid0", 00:21:49.642 "superblock": false, 00:21:49.642 "num_base_bdevs": 4, 00:21:49.642 "num_base_bdevs_discovered": 3, 00:21:49.642 "num_base_bdevs_operational": 4, 00:21:49.642 "base_bdevs_list": [ 00:21:49.642 { 00:21:49.642 "name": "BaseBdev1", 00:21:49.642 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:49.642 "is_configured": true, 00:21:49.642 "data_offset": 0, 00:21:49.642 "data_size": 65536 00:21:49.642 }, 00:21:49.642 { 00:21:49.642 "name": "BaseBdev2", 00:21:49.642 "uuid": "a8888d8c-5833-43cf-82c9-fe1ae602d42f", 00:21:49.642 "is_configured": true, 00:21:49.642 "data_offset": 0, 00:21:49.642 "data_size": 65536 00:21:49.642 }, 00:21:49.642 { 00:21:49.642 "name": "BaseBdev3", 00:21:49.642 "uuid": "ccf52555-0a14-4d96-9801-076d2b98113d", 00:21:49.642 "is_configured": true, 00:21:49.642 "data_offset": 0, 00:21:49.642 "data_size": 65536 00:21:49.642 }, 00:21:49.642 { 00:21:49.642 "name": "BaseBdev4", 00:21:49.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.642 "is_configured": false, 00:21:49.642 "data_offset": 0, 00:21:49.642 "data_size": 0 00:21:49.642 } 00:21:49.642 ] 00:21:49.642 }' 00:21:49.642 14:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:49.642 14:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:50.578 [2024-07-15 14:14:36.536473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:50.578 [2024-07-15 14:14:36.536697] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:21:50.578 [2024-07-15 14:14:36.536833] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:50.578 [2024-07-15 14:14:36.537025] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:50.578 [2024-07-15 14:14:36.537425] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:21:50.578 [2024-07-15 14:14:36.537555] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:21:50.578 [2024-07-15 14:14:36.537883] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.578 BaseBdev4 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:50.578 14:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:50.836 14:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:51.095 [ 00:21:51.095 { 00:21:51.095 "name": "BaseBdev4", 00:21:51.095 "aliases": [ 00:21:51.095 "df226191-9734-43af-b7ff-32856e718d1f" 00:21:51.095 ], 00:21:51.095 "product_name": "Malloc disk", 00:21:51.095 "block_size": 512, 00:21:51.095 "num_blocks": 65536, 00:21:51.095 "uuid": "df226191-9734-43af-b7ff-32856e718d1f", 00:21:51.095 "assigned_rate_limits": { 00:21:51.095 "rw_ios_per_sec": 0, 00:21:51.095 "rw_mbytes_per_sec": 0, 00:21:51.095 "r_mbytes_per_sec": 0, 00:21:51.095 "w_mbytes_per_sec": 0 00:21:51.095 }, 00:21:51.095 "claimed": true, 00:21:51.095 "claim_type": "exclusive_write", 00:21:51.095 "zoned": false, 00:21:51.095 "supported_io_types": { 00:21:51.095 "read": true, 00:21:51.095 "write": true, 00:21:51.095 "unmap": true, 00:21:51.095 "flush": true, 00:21:51.095 "reset": true, 00:21:51.095 "nvme_admin": false, 00:21:51.095 "nvme_io": false, 00:21:51.095 "nvme_io_md": false, 00:21:51.095 "write_zeroes": true, 00:21:51.095 "zcopy": true, 00:21:51.095 "get_zone_info": false, 00:21:51.095 "zone_management": false, 00:21:51.095 "zone_append": false, 00:21:51.095 "compare": false, 00:21:51.095 "compare_and_write": false, 00:21:51.095 "abort": true, 00:21:51.095 "seek_hole": false, 00:21:51.095 "seek_data": false, 00:21:51.095 "copy": true, 00:21:51.095 "nvme_iov_md": false 00:21:51.095 }, 00:21:51.095 "memory_domains": [ 00:21:51.095 { 00:21:51.095 "dma_device_id": "system", 00:21:51.095 "dma_device_type": 1 00:21:51.095 }, 00:21:51.095 { 00:21:51.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:51.095 "dma_device_type": 2 00:21:51.095 } 00:21:51.095 ], 00:21:51.095 "driver_specific": {} 00:21:51.095 } 00:21:51.095 ] 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 
4 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:51.095 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:51.353 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:51.353 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:51.354 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:51.354 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.354 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:51.354 "name": "Existed_Raid", 00:21:51.354 "uuid": "f14a33d2-d50e-4a06-8293-6d0d4d1af3ac", 00:21:51.354 "strip_size_kb": 64, 00:21:51.354 "state": "online", 00:21:51.354 "raid_level": "raid0", 00:21:51.354 "superblock": false, 00:21:51.354 "num_base_bdevs": 4, 00:21:51.354 "num_base_bdevs_discovered": 4, 00:21:51.354 "num_base_bdevs_operational": 4, 00:21:51.354 "base_bdevs_list": [ 00:21:51.354 { 00:21:51.354 "name": "BaseBdev1", 00:21:51.354 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:51.354 "is_configured": true, 00:21:51.354 "data_offset": 0, 00:21:51.354 "data_size": 65536 00:21:51.354 }, 00:21:51.354 { 00:21:51.354 "name": "BaseBdev2", 00:21:51.354 "uuid": "a8888d8c-5833-43cf-82c9-fe1ae602d42f", 00:21:51.354 "is_configured": true, 00:21:51.354 "data_offset": 0, 00:21:51.354 "data_size": 65536 00:21:51.354 }, 00:21:51.354 { 00:21:51.354 "name": "BaseBdev3", 00:21:51.354 "uuid": "ccf52555-0a14-4d96-9801-076d2b98113d", 00:21:51.354 "is_configured": true, 00:21:51.354 "data_offset": 0, 00:21:51.354 "data_size": 65536 00:21:51.354 }, 00:21:51.354 { 00:21:51.354 "name": "BaseBdev4", 00:21:51.354 "uuid": "df226191-9734-43af-b7ff-32856e718d1f", 00:21:51.354 "is_configured": true, 00:21:51.354 "data_offset": 0, 00:21:51.354 "data_size": 65536 00:21:51.354 } 00:21:51.354 ] 00:21:51.354 }' 00:21:51.354 14:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:51.354 14:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:52.288 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:52.288 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:52.288 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:52.288 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:52.288 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:52.288 14:14:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:52.288 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:52.288 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:52.544 [2024-07-15 14:14:38.345020] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:52.544 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:52.544 "name": "Existed_Raid", 00:21:52.544 "aliases": [ 00:21:52.544 "f14a33d2-d50e-4a06-8293-6d0d4d1af3ac" 00:21:52.544 ], 00:21:52.544 "product_name": "Raid Volume", 00:21:52.544 "block_size": 512, 00:21:52.544 "num_blocks": 262144, 00:21:52.544 "uuid": "f14a33d2-d50e-4a06-8293-6d0d4d1af3ac", 00:21:52.544 "assigned_rate_limits": { 00:21:52.544 "rw_ios_per_sec": 0, 00:21:52.544 "rw_mbytes_per_sec": 0, 00:21:52.544 "r_mbytes_per_sec": 0, 00:21:52.544 "w_mbytes_per_sec": 0 00:21:52.544 }, 00:21:52.544 "claimed": false, 00:21:52.544 "zoned": false, 00:21:52.544 "supported_io_types": { 00:21:52.544 "read": true, 00:21:52.544 "write": true, 00:21:52.544 "unmap": true, 00:21:52.544 "flush": true, 00:21:52.544 "reset": true, 00:21:52.544 "nvme_admin": false, 00:21:52.544 "nvme_io": false, 00:21:52.544 "nvme_io_md": false, 00:21:52.544 "write_zeroes": true, 00:21:52.544 "zcopy": false, 00:21:52.544 "get_zone_info": false, 00:21:52.544 "zone_management": false, 00:21:52.544 "zone_append": false, 00:21:52.544 "compare": false, 00:21:52.544 "compare_and_write": false, 00:21:52.544 "abort": false, 00:21:52.544 "seek_hole": false, 00:21:52.544 "seek_data": false, 00:21:52.544 "copy": false, 00:21:52.544 "nvme_iov_md": false 00:21:52.544 }, 00:21:52.544 "memory_domains": [ 00:21:52.544 { 00:21:52.544 "dma_device_id": "system", 00:21:52.544 "dma_device_type": 1 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.544 "dma_device_type": 2 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "dma_device_id": "system", 00:21:52.544 "dma_device_type": 1 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.544 "dma_device_type": 2 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "dma_device_id": "system", 00:21:52.544 "dma_device_type": 1 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.544 "dma_device_type": 2 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "dma_device_id": "system", 00:21:52.544 "dma_device_type": 1 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.544 "dma_device_type": 2 00:21:52.544 } 00:21:52.544 ], 00:21:52.544 "driver_specific": { 00:21:52.544 "raid": { 00:21:52.544 "uuid": "f14a33d2-d50e-4a06-8293-6d0d4d1af3ac", 00:21:52.544 "strip_size_kb": 64, 00:21:52.544 "state": "online", 00:21:52.544 "raid_level": "raid0", 00:21:52.544 "superblock": false, 00:21:52.544 "num_base_bdevs": 4, 00:21:52.544 "num_base_bdevs_discovered": 4, 00:21:52.544 "num_base_bdevs_operational": 4, 00:21:52.544 "base_bdevs_list": [ 00:21:52.544 { 00:21:52.544 "name": "BaseBdev1", 00:21:52.544 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:52.544 "is_configured": true, 00:21:52.544 "data_offset": 0, 00:21:52.544 "data_size": 65536 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "name": "BaseBdev2", 00:21:52.544 "uuid": "a8888d8c-5833-43cf-82c9-fe1ae602d42f", 00:21:52.544 
"is_configured": true, 00:21:52.544 "data_offset": 0, 00:21:52.544 "data_size": 65536 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "name": "BaseBdev3", 00:21:52.544 "uuid": "ccf52555-0a14-4d96-9801-076d2b98113d", 00:21:52.544 "is_configured": true, 00:21:52.544 "data_offset": 0, 00:21:52.544 "data_size": 65536 00:21:52.544 }, 00:21:52.544 { 00:21:52.544 "name": "BaseBdev4", 00:21:52.544 "uuid": "df226191-9734-43af-b7ff-32856e718d1f", 00:21:52.544 "is_configured": true, 00:21:52.544 "data_offset": 0, 00:21:52.544 "data_size": 65536 00:21:52.544 } 00:21:52.544 ] 00:21:52.544 } 00:21:52.544 } 00:21:52.544 }' 00:21:52.544 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:52.544 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:52.544 BaseBdev2 00:21:52.544 BaseBdev3 00:21:52.544 BaseBdev4' 00:21:52.544 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:52.544 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:52.544 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:52.802 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:52.802 "name": "BaseBdev1", 00:21:52.802 "aliases": [ 00:21:52.802 "3561d0ad-32eb-4c20-a550-8fdac5a82620" 00:21:52.802 ], 00:21:52.802 "product_name": "Malloc disk", 00:21:52.802 "block_size": 512, 00:21:52.802 "num_blocks": 65536, 00:21:52.802 "uuid": "3561d0ad-32eb-4c20-a550-8fdac5a82620", 00:21:52.802 "assigned_rate_limits": { 00:21:52.802 "rw_ios_per_sec": 0, 00:21:52.802 "rw_mbytes_per_sec": 0, 00:21:52.802 "r_mbytes_per_sec": 0, 00:21:52.802 "w_mbytes_per_sec": 0 00:21:52.802 }, 00:21:52.802 "claimed": true, 00:21:52.802 "claim_type": "exclusive_write", 00:21:52.802 "zoned": false, 00:21:52.802 "supported_io_types": { 00:21:52.802 "read": true, 00:21:52.802 "write": true, 00:21:52.802 "unmap": true, 00:21:52.802 "flush": true, 00:21:52.802 "reset": true, 00:21:52.802 "nvme_admin": false, 00:21:52.802 "nvme_io": false, 00:21:52.802 "nvme_io_md": false, 00:21:52.802 "write_zeroes": true, 00:21:52.802 "zcopy": true, 00:21:52.802 "get_zone_info": false, 00:21:52.802 "zone_management": false, 00:21:52.802 "zone_append": false, 00:21:52.802 "compare": false, 00:21:52.802 "compare_and_write": false, 00:21:52.802 "abort": true, 00:21:52.802 "seek_hole": false, 00:21:52.802 "seek_data": false, 00:21:52.802 "copy": true, 00:21:52.802 "nvme_iov_md": false 00:21:52.802 }, 00:21:52.802 "memory_domains": [ 00:21:52.802 { 00:21:52.802 "dma_device_id": "system", 00:21:52.802 "dma_device_type": 1 00:21:52.802 }, 00:21:52.802 { 00:21:52.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:52.802 "dma_device_type": 2 00:21:52.802 } 00:21:52.802 ], 00:21:52.802 "driver_specific": {} 00:21:52.802 }' 00:21:52.802 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.802 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:52.802 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:52.802 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:52.802 14:14:38 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.060 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:53.060 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.060 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.060 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:53.060 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.060 14:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.060 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:53.060 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:53.060 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:53.060 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:53.320 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:53.320 "name": "BaseBdev2", 00:21:53.320 "aliases": [ 00:21:53.320 "a8888d8c-5833-43cf-82c9-fe1ae602d42f" 00:21:53.320 ], 00:21:53.320 "product_name": "Malloc disk", 00:21:53.320 "block_size": 512, 00:21:53.320 "num_blocks": 65536, 00:21:53.320 "uuid": "a8888d8c-5833-43cf-82c9-fe1ae602d42f", 00:21:53.320 "assigned_rate_limits": { 00:21:53.320 "rw_ios_per_sec": 0, 00:21:53.320 "rw_mbytes_per_sec": 0, 00:21:53.320 "r_mbytes_per_sec": 0, 00:21:53.320 "w_mbytes_per_sec": 0 00:21:53.320 }, 00:21:53.320 "claimed": true, 00:21:53.320 "claim_type": "exclusive_write", 00:21:53.320 "zoned": false, 00:21:53.320 "supported_io_types": { 00:21:53.320 "read": true, 00:21:53.320 "write": true, 00:21:53.320 "unmap": true, 00:21:53.320 "flush": true, 00:21:53.320 "reset": true, 00:21:53.320 "nvme_admin": false, 00:21:53.320 "nvme_io": false, 00:21:53.320 "nvme_io_md": false, 00:21:53.320 "write_zeroes": true, 00:21:53.320 "zcopy": true, 00:21:53.320 "get_zone_info": false, 00:21:53.320 "zone_management": false, 00:21:53.320 "zone_append": false, 00:21:53.320 "compare": false, 00:21:53.320 "compare_and_write": false, 00:21:53.320 "abort": true, 00:21:53.320 "seek_hole": false, 00:21:53.320 "seek_data": false, 00:21:53.320 "copy": true, 00:21:53.320 "nvme_iov_md": false 00:21:53.320 }, 00:21:53.320 "memory_domains": [ 00:21:53.320 { 00:21:53.320 "dma_device_id": "system", 00:21:53.320 "dma_device_type": 1 00:21:53.320 }, 00:21:53.320 { 00:21:53.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.320 "dma_device_type": 2 00:21:53.320 } 00:21:53.320 ], 00:21:53.320 "driver_specific": {} 00:21:53.320 }' 00:21:53.320 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:53.582 14:14:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:53.582 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.846 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:53.846 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:53.846 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:53.846 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:53.846 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.106 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.106 "name": "BaseBdev3", 00:21:54.106 "aliases": [ 00:21:54.106 "ccf52555-0a14-4d96-9801-076d2b98113d" 00:21:54.106 ], 00:21:54.106 "product_name": "Malloc disk", 00:21:54.106 "block_size": 512, 00:21:54.106 "num_blocks": 65536, 00:21:54.106 "uuid": "ccf52555-0a14-4d96-9801-076d2b98113d", 00:21:54.106 "assigned_rate_limits": { 00:21:54.106 "rw_ios_per_sec": 0, 00:21:54.106 "rw_mbytes_per_sec": 0, 00:21:54.106 "r_mbytes_per_sec": 0, 00:21:54.106 "w_mbytes_per_sec": 0 00:21:54.106 }, 00:21:54.106 "claimed": true, 00:21:54.106 "claim_type": "exclusive_write", 00:21:54.106 "zoned": false, 00:21:54.106 "supported_io_types": { 00:21:54.106 "read": true, 00:21:54.106 "write": true, 00:21:54.106 "unmap": true, 00:21:54.106 "flush": true, 00:21:54.106 "reset": true, 00:21:54.106 "nvme_admin": false, 00:21:54.106 "nvme_io": false, 00:21:54.106 "nvme_io_md": false, 00:21:54.106 "write_zeroes": true, 00:21:54.106 "zcopy": true, 00:21:54.106 "get_zone_info": false, 00:21:54.106 "zone_management": false, 00:21:54.106 "zone_append": false, 00:21:54.106 "compare": false, 00:21:54.106 "compare_and_write": false, 00:21:54.106 "abort": true, 00:21:54.106 "seek_hole": false, 00:21:54.106 "seek_data": false, 00:21:54.106 "copy": true, 00:21:54.106 "nvme_iov_md": false 00:21:54.106 }, 00:21:54.106 "memory_domains": [ 00:21:54.106 { 00:21:54.106 "dma_device_id": "system", 00:21:54.106 "dma_device_type": 1 00:21:54.106 }, 00:21:54.106 { 00:21:54.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.106 "dma_device_type": 2 00:21:54.106 } 00:21:54.106 ], 00:21:54.106 "driver_specific": {} 00:21:54.106 }' 00:21:54.106 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.106 14:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.106 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.106 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.106 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.367 
14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:54.367 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:54.630 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:54.630 "name": "BaseBdev4", 00:21:54.630 "aliases": [ 00:21:54.630 "df226191-9734-43af-b7ff-32856e718d1f" 00:21:54.630 ], 00:21:54.630 "product_name": "Malloc disk", 00:21:54.630 "block_size": 512, 00:21:54.630 "num_blocks": 65536, 00:21:54.630 "uuid": "df226191-9734-43af-b7ff-32856e718d1f", 00:21:54.630 "assigned_rate_limits": { 00:21:54.630 "rw_ios_per_sec": 0, 00:21:54.630 "rw_mbytes_per_sec": 0, 00:21:54.630 "r_mbytes_per_sec": 0, 00:21:54.630 "w_mbytes_per_sec": 0 00:21:54.630 }, 00:21:54.630 "claimed": true, 00:21:54.630 "claim_type": "exclusive_write", 00:21:54.630 "zoned": false, 00:21:54.630 "supported_io_types": { 00:21:54.630 "read": true, 00:21:54.630 "write": true, 00:21:54.630 "unmap": true, 00:21:54.630 "flush": true, 00:21:54.630 "reset": true, 00:21:54.630 "nvme_admin": false, 00:21:54.630 "nvme_io": false, 00:21:54.630 "nvme_io_md": false, 00:21:54.630 "write_zeroes": true, 00:21:54.630 "zcopy": true, 00:21:54.630 "get_zone_info": false, 00:21:54.630 "zone_management": false, 00:21:54.630 "zone_append": false, 00:21:54.630 "compare": false, 00:21:54.630 "compare_and_write": false, 00:21:54.630 "abort": true, 00:21:54.630 "seek_hole": false, 00:21:54.630 "seek_data": false, 00:21:54.630 "copy": true, 00:21:54.630 "nvme_iov_md": false 00:21:54.630 }, 00:21:54.630 "memory_domains": [ 00:21:54.630 { 00:21:54.630 "dma_device_id": "system", 00:21:54.630 "dma_device_type": 1 00:21:54.630 }, 00:21:54.630 { 00:21:54.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.630 "dma_device_type": 2 00:21:54.630 } 00:21:54.630 ], 00:21:54.630 "driver_specific": {} 00:21:54.630 }' 00:21:54.630 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.630 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:54.630 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:54.630 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.896 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:54.896 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:54.896 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.896 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:54.896 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:54.896 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:21:54.896 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:55.163 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:55.163 14:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:55.164 [2024-07-15 14:14:41.149216] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:55.164 [2024-07-15 14:14:41.149497] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:55.164 [2024-07-15 14:14:41.149666] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:55.424 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.681 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.681 "name": "Existed_Raid", 00:21:55.681 "uuid": "f14a33d2-d50e-4a06-8293-6d0d4d1af3ac", 00:21:55.681 "strip_size_kb": 64, 00:21:55.681 "state": "offline", 00:21:55.681 "raid_level": "raid0", 00:21:55.681 "superblock": false, 00:21:55.681 "num_base_bdevs": 4, 00:21:55.681 "num_base_bdevs_discovered": 3, 00:21:55.681 "num_base_bdevs_operational": 3, 00:21:55.681 "base_bdevs_list": [ 00:21:55.681 { 00:21:55.681 "name": null, 00:21:55.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.681 "is_configured": false, 00:21:55.681 "data_offset": 0, 00:21:55.681 "data_size": 65536 00:21:55.681 }, 00:21:55.681 { 00:21:55.681 "name": "BaseBdev2", 00:21:55.681 "uuid": 
"a8888d8c-5833-43cf-82c9-fe1ae602d42f", 00:21:55.681 "is_configured": true, 00:21:55.681 "data_offset": 0, 00:21:55.681 "data_size": 65536 00:21:55.681 }, 00:21:55.681 { 00:21:55.681 "name": "BaseBdev3", 00:21:55.681 "uuid": "ccf52555-0a14-4d96-9801-076d2b98113d", 00:21:55.681 "is_configured": true, 00:21:55.681 "data_offset": 0, 00:21:55.681 "data_size": 65536 00:21:55.681 }, 00:21:55.681 { 00:21:55.681 "name": "BaseBdev4", 00:21:55.681 "uuid": "df226191-9734-43af-b7ff-32856e718d1f", 00:21:55.681 "is_configured": true, 00:21:55.681 "data_offset": 0, 00:21:55.681 "data_size": 65536 00:21:55.681 } 00:21:55.681 ] 00:21:55.681 }' 00:21:55.681 14:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.681 14:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:56.246 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:56.247 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:56.247 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.247 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:56.504 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:56.504 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:56.504 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:57.070 [2024-07-15 14:14:42.803970] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:57.070 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:57.070 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:57.070 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.070 14:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:57.329 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:57.329 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:57.329 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:57.587 [2024-07-15 14:14:43.533303] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:57.845 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:57.845 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:57.845 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:57.845 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.103 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:58.103 14:14:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:58.103 14:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:58.361 [2024-07-15 14:14:44.131479] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:58.361 [2024-07-15 14:14:44.131708] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:21:58.361 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:58.361 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:58.361 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.361 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:58.619 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:58.619 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:58.619 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:58.619 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:58.619 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:58.619 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:58.875 BaseBdev2 00:21:58.875 14:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:58.875 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:58.875 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:58.875 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:58.875 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:58.875 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:58.875 14:14:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:59.183 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:59.452 [ 00:21:59.452 { 00:21:59.452 "name": "BaseBdev2", 00:21:59.452 "aliases": [ 00:21:59.452 "2ae9a398-9b91-4565-b114-54f0008ec1ba" 00:21:59.452 ], 00:21:59.452 "product_name": "Malloc disk", 00:21:59.452 "block_size": 512, 00:21:59.452 "num_blocks": 65536, 00:21:59.452 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:21:59.452 "assigned_rate_limits": { 00:21:59.452 "rw_ios_per_sec": 0, 00:21:59.452 "rw_mbytes_per_sec": 0, 00:21:59.452 "r_mbytes_per_sec": 0, 00:21:59.452 "w_mbytes_per_sec": 0 00:21:59.452 }, 00:21:59.452 "claimed": false, 00:21:59.452 "zoned": false, 00:21:59.452 "supported_io_types": { 00:21:59.452 "read": true, 00:21:59.452 "write": true, 00:21:59.452 "unmap": 
true, 00:21:59.452 "flush": true, 00:21:59.452 "reset": true, 00:21:59.452 "nvme_admin": false, 00:21:59.452 "nvme_io": false, 00:21:59.452 "nvme_io_md": false, 00:21:59.452 "write_zeroes": true, 00:21:59.452 "zcopy": true, 00:21:59.452 "get_zone_info": false, 00:21:59.452 "zone_management": false, 00:21:59.452 "zone_append": false, 00:21:59.452 "compare": false, 00:21:59.452 "compare_and_write": false, 00:21:59.452 "abort": true, 00:21:59.452 "seek_hole": false, 00:21:59.452 "seek_data": false, 00:21:59.452 "copy": true, 00:21:59.452 "nvme_iov_md": false 00:21:59.452 }, 00:21:59.452 "memory_domains": [ 00:21:59.452 { 00:21:59.452 "dma_device_id": "system", 00:21:59.452 "dma_device_type": 1 00:21:59.452 }, 00:21:59.452 { 00:21:59.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:59.452 "dma_device_type": 2 00:21:59.452 } 00:21:59.452 ], 00:21:59.452 "driver_specific": {} 00:21:59.452 } 00:21:59.452 ] 00:21:59.452 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:21:59.452 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:59.452 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:59.452 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:59.767 BaseBdev3 00:21:59.767 14:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:59.767 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:59.767 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:59.767 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:21:59.767 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:59.767 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:59.767 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.025 14:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:00.283 [ 00:22:00.283 { 00:22:00.283 "name": "BaseBdev3", 00:22:00.283 "aliases": [ 00:22:00.283 "30c2442f-1c81-463e-bc77-af1a5eb5561e" 00:22:00.283 ], 00:22:00.283 "product_name": "Malloc disk", 00:22:00.283 "block_size": 512, 00:22:00.283 "num_blocks": 65536, 00:22:00.283 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:00.283 "assigned_rate_limits": { 00:22:00.283 "rw_ios_per_sec": 0, 00:22:00.283 "rw_mbytes_per_sec": 0, 00:22:00.283 "r_mbytes_per_sec": 0, 00:22:00.283 "w_mbytes_per_sec": 0 00:22:00.283 }, 00:22:00.283 "claimed": false, 00:22:00.283 "zoned": false, 00:22:00.283 "supported_io_types": { 00:22:00.283 "read": true, 00:22:00.283 "write": true, 00:22:00.283 "unmap": true, 00:22:00.283 "flush": true, 00:22:00.283 "reset": true, 00:22:00.283 "nvme_admin": false, 00:22:00.283 "nvme_io": false, 00:22:00.283 "nvme_io_md": false, 00:22:00.283 "write_zeroes": true, 00:22:00.283 "zcopy": true, 00:22:00.283 "get_zone_info": false, 00:22:00.283 "zone_management": false, 00:22:00.283 "zone_append": false, 00:22:00.283 
"compare": false, 00:22:00.283 "compare_and_write": false, 00:22:00.283 "abort": true, 00:22:00.283 "seek_hole": false, 00:22:00.283 "seek_data": false, 00:22:00.283 "copy": true, 00:22:00.283 "nvme_iov_md": false 00:22:00.283 }, 00:22:00.283 "memory_domains": [ 00:22:00.283 { 00:22:00.283 "dma_device_id": "system", 00:22:00.283 "dma_device_type": 1 00:22:00.283 }, 00:22:00.283 { 00:22:00.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:00.283 "dma_device_type": 2 00:22:00.283 } 00:22:00.283 ], 00:22:00.283 "driver_specific": {} 00:22:00.283 } 00:22:00.283 ] 00:22:00.283 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:00.283 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:00.283 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:00.283 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:00.541 BaseBdev4 00:22:00.541 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:00.541 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:00.541 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:00.541 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:00.541 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:00.541 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:00.541 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:00.798 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:01.055 [ 00:22:01.055 { 00:22:01.055 "name": "BaseBdev4", 00:22:01.055 "aliases": [ 00:22:01.055 "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9" 00:22:01.055 ], 00:22:01.055 "product_name": "Malloc disk", 00:22:01.055 "block_size": 512, 00:22:01.055 "num_blocks": 65536, 00:22:01.055 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:01.055 "assigned_rate_limits": { 00:22:01.055 "rw_ios_per_sec": 0, 00:22:01.055 "rw_mbytes_per_sec": 0, 00:22:01.055 "r_mbytes_per_sec": 0, 00:22:01.055 "w_mbytes_per_sec": 0 00:22:01.055 }, 00:22:01.055 "claimed": false, 00:22:01.055 "zoned": false, 00:22:01.055 "supported_io_types": { 00:22:01.055 "read": true, 00:22:01.055 "write": true, 00:22:01.055 "unmap": true, 00:22:01.055 "flush": true, 00:22:01.055 "reset": true, 00:22:01.055 "nvme_admin": false, 00:22:01.055 "nvme_io": false, 00:22:01.055 "nvme_io_md": false, 00:22:01.055 "write_zeroes": true, 00:22:01.055 "zcopy": true, 00:22:01.055 "get_zone_info": false, 00:22:01.055 "zone_management": false, 00:22:01.055 "zone_append": false, 00:22:01.055 "compare": false, 00:22:01.055 "compare_and_write": false, 00:22:01.055 "abort": true, 00:22:01.055 "seek_hole": false, 00:22:01.055 "seek_data": false, 00:22:01.055 "copy": true, 00:22:01.055 "nvme_iov_md": false 00:22:01.055 }, 00:22:01.055 "memory_domains": [ 00:22:01.055 { 00:22:01.055 "dma_device_id": "system", 00:22:01.055 
"dma_device_type": 1 00:22:01.055 }, 00:22:01.055 { 00:22:01.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:01.055 "dma_device_type": 2 00:22:01.055 } 00:22:01.055 ], 00:22:01.055 "driver_specific": {} 00:22:01.055 } 00:22:01.055 ] 00:22:01.055 14:14:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:01.055 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:01.055 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:01.055 14:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:01.313 [2024-07-15 14:14:47.216760] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:01.313 [2024-07-15 14:14:47.218392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:01.313 [2024-07-15 14:14:47.218575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.313 [2024-07-15 14:14:47.220134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:01.313 [2024-07-15 14:14:47.220307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.313 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.573 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.573 "name": "Existed_Raid", 00:22:01.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.573 "strip_size_kb": 64, 00:22:01.573 "state": "configuring", 00:22:01.573 "raid_level": "raid0", 00:22:01.573 "superblock": false, 00:22:01.573 "num_base_bdevs": 4, 00:22:01.573 "num_base_bdevs_discovered": 3, 00:22:01.573 "num_base_bdevs_operational": 4, 00:22:01.573 "base_bdevs_list": [ 00:22:01.573 { 00:22:01.573 "name": "BaseBdev1", 00:22:01.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:01.573 "is_configured": 
false, 00:22:01.573 "data_offset": 0, 00:22:01.573 "data_size": 0 00:22:01.573 }, 00:22:01.573 { 00:22:01.573 "name": "BaseBdev2", 00:22:01.573 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:01.573 "is_configured": true, 00:22:01.573 "data_offset": 0, 00:22:01.573 "data_size": 65536 00:22:01.573 }, 00:22:01.573 { 00:22:01.573 "name": "BaseBdev3", 00:22:01.573 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:01.573 "is_configured": true, 00:22:01.573 "data_offset": 0, 00:22:01.573 "data_size": 65536 00:22:01.573 }, 00:22:01.573 { 00:22:01.573 "name": "BaseBdev4", 00:22:01.573 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:01.573 "is_configured": true, 00:22:01.573 "data_offset": 0, 00:22:01.573 "data_size": 65536 00:22:01.573 } 00:22:01.573 ] 00:22:01.573 }' 00:22:01.573 14:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.573 14:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:02.177 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:02.434 [2024-07-15 14:14:48.388926] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.434 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.435 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.435 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.435 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.001 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.001 "name": "Existed_Raid", 00:22:03.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.001 "strip_size_kb": 64, 00:22:03.001 "state": "configuring", 00:22:03.001 "raid_level": "raid0", 00:22:03.001 "superblock": false, 00:22:03.001 "num_base_bdevs": 4, 00:22:03.001 "num_base_bdevs_discovered": 2, 00:22:03.001 "num_base_bdevs_operational": 4, 00:22:03.001 "base_bdevs_list": [ 00:22:03.001 { 00:22:03.001 "name": "BaseBdev1", 00:22:03.001 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:03.001 "is_configured": false, 00:22:03.001 "data_offset": 0, 00:22:03.001 "data_size": 0 00:22:03.001 }, 00:22:03.001 { 00:22:03.001 "name": null, 00:22:03.001 "uuid": 
"2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:03.001 "is_configured": false, 00:22:03.001 "data_offset": 0, 00:22:03.001 "data_size": 65536 00:22:03.001 }, 00:22:03.001 { 00:22:03.001 "name": "BaseBdev3", 00:22:03.001 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:03.001 "is_configured": true, 00:22:03.001 "data_offset": 0, 00:22:03.001 "data_size": 65536 00:22:03.001 }, 00:22:03.001 { 00:22:03.001 "name": "BaseBdev4", 00:22:03.001 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:03.001 "is_configured": true, 00:22:03.001 "data_offset": 0, 00:22:03.001 "data_size": 65536 00:22:03.001 } 00:22:03.001 ] 00:22:03.001 }' 00:22:03.001 14:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.001 14:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:03.568 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:03.568 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.827 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:03.827 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:04.086 [2024-07-15 14:14:49.932472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:04.086 BaseBdev1 00:22:04.086 14:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:04.086 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:04.086 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:04.086 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:04.086 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:04.086 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:04.086 14:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:04.345 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:04.604 [ 00:22:04.604 { 00:22:04.604 "name": "BaseBdev1", 00:22:04.604 "aliases": [ 00:22:04.604 "6af9a280-bb9e-4d27-8776-2027c94b8a20" 00:22:04.604 ], 00:22:04.604 "product_name": "Malloc disk", 00:22:04.604 "block_size": 512, 00:22:04.604 "num_blocks": 65536, 00:22:04.604 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:04.604 "assigned_rate_limits": { 00:22:04.604 "rw_ios_per_sec": 0, 00:22:04.604 "rw_mbytes_per_sec": 0, 00:22:04.604 "r_mbytes_per_sec": 0, 00:22:04.604 "w_mbytes_per_sec": 0 00:22:04.604 }, 00:22:04.604 "claimed": true, 00:22:04.604 "claim_type": "exclusive_write", 00:22:04.604 "zoned": false, 00:22:04.604 "supported_io_types": { 00:22:04.604 "read": true, 00:22:04.604 "write": true, 00:22:04.604 "unmap": true, 00:22:04.604 "flush": true, 00:22:04.604 "reset": true, 00:22:04.604 "nvme_admin": false, 00:22:04.604 "nvme_io": false, 00:22:04.604 
"nvme_io_md": false, 00:22:04.604 "write_zeroes": true, 00:22:04.604 "zcopy": true, 00:22:04.604 "get_zone_info": false, 00:22:04.604 "zone_management": false, 00:22:04.604 "zone_append": false, 00:22:04.604 "compare": false, 00:22:04.604 "compare_and_write": false, 00:22:04.604 "abort": true, 00:22:04.604 "seek_hole": false, 00:22:04.604 "seek_data": false, 00:22:04.604 "copy": true, 00:22:04.604 "nvme_iov_md": false 00:22:04.604 }, 00:22:04.604 "memory_domains": [ 00:22:04.604 { 00:22:04.604 "dma_device_id": "system", 00:22:04.604 "dma_device_type": 1 00:22:04.604 }, 00:22:04.604 { 00:22:04.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:04.604 "dma_device_type": 2 00:22:04.604 } 00:22:04.604 ], 00:22:04.604 "driver_specific": {} 00:22:04.604 } 00:22:04.604 ] 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.604 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:04.863 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:04.863 "name": "Existed_Raid", 00:22:04.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:04.863 "strip_size_kb": 64, 00:22:04.863 "state": "configuring", 00:22:04.863 "raid_level": "raid0", 00:22:04.863 "superblock": false, 00:22:04.863 "num_base_bdevs": 4, 00:22:04.863 "num_base_bdevs_discovered": 3, 00:22:04.863 "num_base_bdevs_operational": 4, 00:22:04.863 "base_bdevs_list": [ 00:22:04.863 { 00:22:04.863 "name": "BaseBdev1", 00:22:04.863 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:04.863 "is_configured": true, 00:22:04.863 "data_offset": 0, 00:22:04.863 "data_size": 65536 00:22:04.863 }, 00:22:04.863 { 00:22:04.863 "name": null, 00:22:04.863 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:04.863 "is_configured": false, 00:22:04.863 "data_offset": 0, 00:22:04.863 "data_size": 65536 00:22:04.863 }, 00:22:04.863 { 00:22:04.863 "name": "BaseBdev3", 00:22:04.863 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:04.863 "is_configured": true, 00:22:04.863 "data_offset": 0, 00:22:04.863 "data_size": 65536 00:22:04.863 }, 00:22:04.863 { 00:22:04.863 
"name": "BaseBdev4", 00:22:04.863 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:04.863 "is_configured": true, 00:22:04.863 "data_offset": 0, 00:22:04.863 "data_size": 65536 00:22:04.863 } 00:22:04.863 ] 00:22:04.863 }' 00:22:04.863 14:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:04.863 14:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:05.429 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:05.429 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.727 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:05.727 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:05.986 [2024-07-15 14:14:51.964826] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:05.986 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:05.986 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.986 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:05.986 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:05.986 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:05.986 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:06.244 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:06.244 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:06.244 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:06.244 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:06.244 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.244 14:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:06.244 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:06.244 "name": "Existed_Raid", 00:22:06.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.244 "strip_size_kb": 64, 00:22:06.244 "state": "configuring", 00:22:06.244 "raid_level": "raid0", 00:22:06.244 "superblock": false, 00:22:06.244 "num_base_bdevs": 4, 00:22:06.244 "num_base_bdevs_discovered": 2, 00:22:06.244 "num_base_bdevs_operational": 4, 00:22:06.244 "base_bdevs_list": [ 00:22:06.244 { 00:22:06.244 "name": "BaseBdev1", 00:22:06.244 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:06.244 "is_configured": true, 00:22:06.244 "data_offset": 0, 00:22:06.244 "data_size": 65536 00:22:06.244 }, 00:22:06.244 { 00:22:06.244 "name": null, 00:22:06.244 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:06.244 "is_configured": false, 00:22:06.244 "data_offset": 0, 00:22:06.244 "data_size": 
65536 00:22:06.244 }, 00:22:06.244 { 00:22:06.244 "name": null, 00:22:06.244 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:06.244 "is_configured": false, 00:22:06.244 "data_offset": 0, 00:22:06.244 "data_size": 65536 00:22:06.244 }, 00:22:06.244 { 00:22:06.244 "name": "BaseBdev4", 00:22:06.244 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:06.244 "is_configured": true, 00:22:06.244 "data_offset": 0, 00:22:06.244 "data_size": 65536 00:22:06.244 } 00:22:06.244 ] 00:22:06.244 }' 00:22:06.244 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:06.244 14:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:07.180 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.180 14:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:07.180 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:07.180 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:07.438 [2024-07-15 14:14:53.377010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:07.438 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.695 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:07.695 "name": "Existed_Raid", 00:22:07.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:07.695 "strip_size_kb": 64, 00:22:07.695 "state": "configuring", 00:22:07.695 "raid_level": "raid0", 00:22:07.695 "superblock": false, 00:22:07.695 "num_base_bdevs": 4, 00:22:07.695 "num_base_bdevs_discovered": 3, 00:22:07.695 "num_base_bdevs_operational": 4, 00:22:07.695 "base_bdevs_list": [ 00:22:07.695 { 00:22:07.695 "name": "BaseBdev1", 00:22:07.695 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:07.695 
"is_configured": true, 00:22:07.695 "data_offset": 0, 00:22:07.695 "data_size": 65536 00:22:07.695 }, 00:22:07.695 { 00:22:07.695 "name": null, 00:22:07.695 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:07.695 "is_configured": false, 00:22:07.695 "data_offset": 0, 00:22:07.695 "data_size": 65536 00:22:07.695 }, 00:22:07.695 { 00:22:07.695 "name": "BaseBdev3", 00:22:07.695 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:07.695 "is_configured": true, 00:22:07.695 "data_offset": 0, 00:22:07.695 "data_size": 65536 00:22:07.695 }, 00:22:07.695 { 00:22:07.695 "name": "BaseBdev4", 00:22:07.695 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:07.695 "is_configured": true, 00:22:07.695 "data_offset": 0, 00:22:07.695 "data_size": 65536 00:22:07.695 } 00:22:07.695 ] 00:22:07.695 }' 00:22:07.695 14:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:07.695 14:14:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:08.260 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.260 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:08.831 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:08.831 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:08.831 [2024-07-15 14:14:54.797246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.090 14:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:09.351 14:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:09.351 "name": "Existed_Raid", 00:22:09.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.351 "strip_size_kb": 64, 00:22:09.351 "state": "configuring", 00:22:09.351 "raid_level": "raid0", 00:22:09.351 "superblock": false, 00:22:09.351 
"num_base_bdevs": 4, 00:22:09.351 "num_base_bdevs_discovered": 2, 00:22:09.351 "num_base_bdevs_operational": 4, 00:22:09.351 "base_bdevs_list": [ 00:22:09.351 { 00:22:09.351 "name": null, 00:22:09.351 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:09.351 "is_configured": false, 00:22:09.351 "data_offset": 0, 00:22:09.351 "data_size": 65536 00:22:09.351 }, 00:22:09.351 { 00:22:09.351 "name": null, 00:22:09.351 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:09.351 "is_configured": false, 00:22:09.351 "data_offset": 0, 00:22:09.351 "data_size": 65536 00:22:09.351 }, 00:22:09.351 { 00:22:09.351 "name": "BaseBdev3", 00:22:09.351 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:09.351 "is_configured": true, 00:22:09.351 "data_offset": 0, 00:22:09.351 "data_size": 65536 00:22:09.351 }, 00:22:09.351 { 00:22:09.351 "name": "BaseBdev4", 00:22:09.351 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:09.351 "is_configured": true, 00:22:09.351 "data_offset": 0, 00:22:09.351 "data_size": 65536 00:22:09.351 } 00:22:09.351 ] 00:22:09.351 }' 00:22:09.351 14:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:09.351 14:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:09.920 14:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.920 14:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:10.179 14:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:10.179 14:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:10.437 [2024-07-15 14:14:56.261840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.437 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:10.697 14:14:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:10.697 "name": "Existed_Raid", 00:22:10.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.697 "strip_size_kb": 64, 00:22:10.697 "state": "configuring", 00:22:10.697 "raid_level": "raid0", 00:22:10.697 "superblock": false, 00:22:10.697 "num_base_bdevs": 4, 00:22:10.697 "num_base_bdevs_discovered": 3, 00:22:10.697 "num_base_bdevs_operational": 4, 00:22:10.697 "base_bdevs_list": [ 00:22:10.697 { 00:22:10.697 "name": null, 00:22:10.697 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:10.697 "is_configured": false, 00:22:10.697 "data_offset": 0, 00:22:10.697 "data_size": 65536 00:22:10.697 }, 00:22:10.697 { 00:22:10.697 "name": "BaseBdev2", 00:22:10.697 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:10.697 "is_configured": true, 00:22:10.697 "data_offset": 0, 00:22:10.697 "data_size": 65536 00:22:10.697 }, 00:22:10.697 { 00:22:10.697 "name": "BaseBdev3", 00:22:10.697 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:10.697 "is_configured": true, 00:22:10.697 "data_offset": 0, 00:22:10.697 "data_size": 65536 00:22:10.697 }, 00:22:10.697 { 00:22:10.697 "name": "BaseBdev4", 00:22:10.697 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:10.697 "is_configured": true, 00:22:10.697 "data_offset": 0, 00:22:10.697 "data_size": 65536 00:22:10.697 } 00:22:10.697 ] 00:22:10.697 }' 00:22:10.697 14:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:10.697 14:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:11.264 14:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:11.264 14:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.523 14:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:11.523 14:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:11.523 14:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.783 14:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6af9a280-bb9e-4d27-8776-2027c94b8a20 00:22:12.042 [2024-07-15 14:14:57.949612] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:12.042 [2024-07-15 14:14:57.949921] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:22:12.042 [2024-07-15 14:14:57.949972] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:12.042 [2024-07-15 14:14:57.950192] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:12.042 [2024-07-15 14:14:57.950521] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:22:12.042 [2024-07-15 14:14:57.950653] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:22:12.042 [2024-07-15 14:14:57.950962] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.042 NewBaseBdev 00:22:12.042 14:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:22:12.042 14:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:12.042 14:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:12.042 14:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:22:12.042 14:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:12.042 14:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:12.042 14:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:12.301 14:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:12.560 [ 00:22:12.560 { 00:22:12.561 "name": "NewBaseBdev", 00:22:12.561 "aliases": [ 00:22:12.561 "6af9a280-bb9e-4d27-8776-2027c94b8a20" 00:22:12.561 ], 00:22:12.561 "product_name": "Malloc disk", 00:22:12.561 "block_size": 512, 00:22:12.561 "num_blocks": 65536, 00:22:12.561 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:12.561 "assigned_rate_limits": { 00:22:12.561 "rw_ios_per_sec": 0, 00:22:12.561 "rw_mbytes_per_sec": 0, 00:22:12.561 "r_mbytes_per_sec": 0, 00:22:12.561 "w_mbytes_per_sec": 0 00:22:12.561 }, 00:22:12.561 "claimed": true, 00:22:12.561 "claim_type": "exclusive_write", 00:22:12.561 "zoned": false, 00:22:12.561 "supported_io_types": { 00:22:12.561 "read": true, 00:22:12.561 "write": true, 00:22:12.561 "unmap": true, 00:22:12.561 "flush": true, 00:22:12.561 "reset": true, 00:22:12.561 "nvme_admin": false, 00:22:12.561 "nvme_io": false, 00:22:12.561 "nvme_io_md": false, 00:22:12.561 "write_zeroes": true, 00:22:12.561 "zcopy": true, 00:22:12.561 "get_zone_info": false, 00:22:12.561 "zone_management": false, 00:22:12.561 "zone_append": false, 00:22:12.561 "compare": false, 00:22:12.561 "compare_and_write": false, 00:22:12.561 "abort": true, 00:22:12.561 "seek_hole": false, 00:22:12.561 "seek_data": false, 00:22:12.561 "copy": true, 00:22:12.561 "nvme_iov_md": false 00:22:12.561 }, 00:22:12.561 "memory_domains": [ 00:22:12.561 { 00:22:12.561 "dma_device_id": "system", 00:22:12.561 "dma_device_type": 1 00:22:12.561 }, 00:22:12.561 { 00:22:12.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:12.561 "dma_device_type": 2 00:22:12.561 } 00:22:12.561 ], 00:22:12.561 "driver_specific": {} 00:22:12.561 } 00:22:12.561 ] 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.561 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:12.855 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:12.855 "name": "Existed_Raid", 00:22:12.855 "uuid": "d0cd22bb-ece5-48f1-bc20-7ebe9c989a5f", 00:22:12.855 "strip_size_kb": 64, 00:22:12.855 "state": "online", 00:22:12.855 "raid_level": "raid0", 00:22:12.855 "superblock": false, 00:22:12.855 "num_base_bdevs": 4, 00:22:12.855 "num_base_bdevs_discovered": 4, 00:22:12.855 "num_base_bdevs_operational": 4, 00:22:12.855 "base_bdevs_list": [ 00:22:12.855 { 00:22:12.855 "name": "NewBaseBdev", 00:22:12.855 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:12.855 "is_configured": true, 00:22:12.855 "data_offset": 0, 00:22:12.855 "data_size": 65536 00:22:12.855 }, 00:22:12.855 { 00:22:12.855 "name": "BaseBdev2", 00:22:12.855 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:12.855 "is_configured": true, 00:22:12.855 "data_offset": 0, 00:22:12.855 "data_size": 65536 00:22:12.855 }, 00:22:12.855 { 00:22:12.855 "name": "BaseBdev3", 00:22:12.855 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:12.855 "is_configured": true, 00:22:12.855 "data_offset": 0, 00:22:12.855 "data_size": 65536 00:22:12.855 }, 00:22:12.855 { 00:22:12.855 "name": "BaseBdev4", 00:22:12.855 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:12.855 "is_configured": true, 00:22:12.855 "data_offset": 0, 00:22:12.855 "data_size": 65536 00:22:12.855 } 00:22:12.855 ] 00:22:12.855 }' 00:22:12.855 14:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:12.855 14:14:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:13.811 [2024-07-15 14:14:59.710285] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:13.811 "name": "Existed_Raid", 00:22:13.811 "aliases": [ 00:22:13.811 
"d0cd22bb-ece5-48f1-bc20-7ebe9c989a5f" 00:22:13.811 ], 00:22:13.811 "product_name": "Raid Volume", 00:22:13.811 "block_size": 512, 00:22:13.811 "num_blocks": 262144, 00:22:13.811 "uuid": "d0cd22bb-ece5-48f1-bc20-7ebe9c989a5f", 00:22:13.811 "assigned_rate_limits": { 00:22:13.811 "rw_ios_per_sec": 0, 00:22:13.811 "rw_mbytes_per_sec": 0, 00:22:13.811 "r_mbytes_per_sec": 0, 00:22:13.811 "w_mbytes_per_sec": 0 00:22:13.811 }, 00:22:13.811 "claimed": false, 00:22:13.811 "zoned": false, 00:22:13.811 "supported_io_types": { 00:22:13.811 "read": true, 00:22:13.811 "write": true, 00:22:13.811 "unmap": true, 00:22:13.811 "flush": true, 00:22:13.811 "reset": true, 00:22:13.811 "nvme_admin": false, 00:22:13.811 "nvme_io": false, 00:22:13.811 "nvme_io_md": false, 00:22:13.811 "write_zeroes": true, 00:22:13.811 "zcopy": false, 00:22:13.811 "get_zone_info": false, 00:22:13.811 "zone_management": false, 00:22:13.811 "zone_append": false, 00:22:13.811 "compare": false, 00:22:13.811 "compare_and_write": false, 00:22:13.811 "abort": false, 00:22:13.811 "seek_hole": false, 00:22:13.811 "seek_data": false, 00:22:13.811 "copy": false, 00:22:13.811 "nvme_iov_md": false 00:22:13.811 }, 00:22:13.811 "memory_domains": [ 00:22:13.811 { 00:22:13.811 "dma_device_id": "system", 00:22:13.811 "dma_device_type": 1 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.811 "dma_device_type": 2 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "dma_device_id": "system", 00:22:13.811 "dma_device_type": 1 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.811 "dma_device_type": 2 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "dma_device_id": "system", 00:22:13.811 "dma_device_type": 1 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.811 "dma_device_type": 2 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "dma_device_id": "system", 00:22:13.811 "dma_device_type": 1 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:13.811 "dma_device_type": 2 00:22:13.811 } 00:22:13.811 ], 00:22:13.811 "driver_specific": { 00:22:13.811 "raid": { 00:22:13.811 "uuid": "d0cd22bb-ece5-48f1-bc20-7ebe9c989a5f", 00:22:13.811 "strip_size_kb": 64, 00:22:13.811 "state": "online", 00:22:13.811 "raid_level": "raid0", 00:22:13.811 "superblock": false, 00:22:13.811 "num_base_bdevs": 4, 00:22:13.811 "num_base_bdevs_discovered": 4, 00:22:13.811 "num_base_bdevs_operational": 4, 00:22:13.811 "base_bdevs_list": [ 00:22:13.811 { 00:22:13.811 "name": "NewBaseBdev", 00:22:13.811 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:13.811 "is_configured": true, 00:22:13.811 "data_offset": 0, 00:22:13.811 "data_size": 65536 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "name": "BaseBdev2", 00:22:13.811 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:13.811 "is_configured": true, 00:22:13.811 "data_offset": 0, 00:22:13.811 "data_size": 65536 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "name": "BaseBdev3", 00:22:13.811 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:13.811 "is_configured": true, 00:22:13.811 "data_offset": 0, 00:22:13.811 "data_size": 65536 00:22:13.811 }, 00:22:13.811 { 00:22:13.811 "name": "BaseBdev4", 00:22:13.811 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:13.811 "is_configured": true, 00:22:13.811 "data_offset": 0, 00:22:13.811 "data_size": 65536 00:22:13.811 } 00:22:13.811 ] 00:22:13.811 } 00:22:13.811 } 00:22:13.811 }' 00:22:13.811 14:14:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:13.811 BaseBdev2 00:22:13.811 BaseBdev3 00:22:13.811 BaseBdev4' 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:13.811 14:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.069 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.069 "name": "NewBaseBdev", 00:22:14.069 "aliases": [ 00:22:14.069 "6af9a280-bb9e-4d27-8776-2027c94b8a20" 00:22:14.069 ], 00:22:14.069 "product_name": "Malloc disk", 00:22:14.069 "block_size": 512, 00:22:14.069 "num_blocks": 65536, 00:22:14.069 "uuid": "6af9a280-bb9e-4d27-8776-2027c94b8a20", 00:22:14.069 "assigned_rate_limits": { 00:22:14.069 "rw_ios_per_sec": 0, 00:22:14.069 "rw_mbytes_per_sec": 0, 00:22:14.069 "r_mbytes_per_sec": 0, 00:22:14.069 "w_mbytes_per_sec": 0 00:22:14.069 }, 00:22:14.069 "claimed": true, 00:22:14.069 "claim_type": "exclusive_write", 00:22:14.069 "zoned": false, 00:22:14.069 "supported_io_types": { 00:22:14.069 "read": true, 00:22:14.069 "write": true, 00:22:14.069 "unmap": true, 00:22:14.069 "flush": true, 00:22:14.069 "reset": true, 00:22:14.069 "nvme_admin": false, 00:22:14.069 "nvme_io": false, 00:22:14.069 "nvme_io_md": false, 00:22:14.069 "write_zeroes": true, 00:22:14.069 "zcopy": true, 00:22:14.069 "get_zone_info": false, 00:22:14.069 "zone_management": false, 00:22:14.069 "zone_append": false, 00:22:14.069 "compare": false, 00:22:14.069 "compare_and_write": false, 00:22:14.069 "abort": true, 00:22:14.069 "seek_hole": false, 00:22:14.069 "seek_data": false, 00:22:14.069 "copy": true, 00:22:14.069 "nvme_iov_md": false 00:22:14.069 }, 00:22:14.069 "memory_domains": [ 00:22:14.069 { 00:22:14.069 "dma_device_id": "system", 00:22:14.069 "dma_device_type": 1 00:22:14.069 }, 00:22:14.069 { 00:22:14.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.069 "dma_device_type": 2 00:22:14.069 } 00:22:14.069 ], 00:22:14.069 "driver_specific": {} 00:22:14.069 }' 00:22:14.069 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.327 14:15:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:14.585 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:14.585 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.585 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:14.585 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:14.843 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:14.843 "name": "BaseBdev2", 00:22:14.843 "aliases": [ 00:22:14.843 "2ae9a398-9b91-4565-b114-54f0008ec1ba" 00:22:14.843 ], 00:22:14.843 "product_name": "Malloc disk", 00:22:14.843 "block_size": 512, 00:22:14.843 "num_blocks": 65536, 00:22:14.843 "uuid": "2ae9a398-9b91-4565-b114-54f0008ec1ba", 00:22:14.843 "assigned_rate_limits": { 00:22:14.843 "rw_ios_per_sec": 0, 00:22:14.843 "rw_mbytes_per_sec": 0, 00:22:14.843 "r_mbytes_per_sec": 0, 00:22:14.843 "w_mbytes_per_sec": 0 00:22:14.843 }, 00:22:14.843 "claimed": true, 00:22:14.843 "claim_type": "exclusive_write", 00:22:14.843 "zoned": false, 00:22:14.843 "supported_io_types": { 00:22:14.843 "read": true, 00:22:14.843 "write": true, 00:22:14.843 "unmap": true, 00:22:14.843 "flush": true, 00:22:14.843 "reset": true, 00:22:14.843 "nvme_admin": false, 00:22:14.843 "nvme_io": false, 00:22:14.843 "nvme_io_md": false, 00:22:14.843 "write_zeroes": true, 00:22:14.843 "zcopy": true, 00:22:14.843 "get_zone_info": false, 00:22:14.843 "zone_management": false, 00:22:14.843 "zone_append": false, 00:22:14.843 "compare": false, 00:22:14.843 "compare_and_write": false, 00:22:14.843 "abort": true, 00:22:14.843 "seek_hole": false, 00:22:14.843 "seek_data": false, 00:22:14.844 "copy": true, 00:22:14.844 "nvme_iov_md": false 00:22:14.844 }, 00:22:14.844 "memory_domains": [ 00:22:14.844 { 00:22:14.844 "dma_device_id": "system", 00:22:14.844 "dma_device_type": 1 00:22:14.844 }, 00:22:14.844 { 00:22:14.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.844 "dma_device_type": 2 00:22:14.844 } 00:22:14.844 ], 00:22:14.844 "driver_specific": {} 00:22:14.844 }' 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:14.844 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.101 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.101 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.101 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.101 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.101 14:15:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.101 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.101 14:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:15.360 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.360 "name": "BaseBdev3", 00:22:15.360 "aliases": [ 00:22:15.360 "30c2442f-1c81-463e-bc77-af1a5eb5561e" 00:22:15.360 ], 00:22:15.360 "product_name": "Malloc disk", 00:22:15.360 "block_size": 512, 00:22:15.360 "num_blocks": 65536, 00:22:15.360 "uuid": "30c2442f-1c81-463e-bc77-af1a5eb5561e", 00:22:15.360 "assigned_rate_limits": { 00:22:15.360 "rw_ios_per_sec": 0, 00:22:15.360 "rw_mbytes_per_sec": 0, 00:22:15.360 "r_mbytes_per_sec": 0, 00:22:15.360 "w_mbytes_per_sec": 0 00:22:15.360 }, 00:22:15.360 "claimed": true, 00:22:15.360 "claim_type": "exclusive_write", 00:22:15.360 "zoned": false, 00:22:15.360 "supported_io_types": { 00:22:15.360 "read": true, 00:22:15.360 "write": true, 00:22:15.360 "unmap": true, 00:22:15.360 "flush": true, 00:22:15.360 "reset": true, 00:22:15.360 "nvme_admin": false, 00:22:15.360 "nvme_io": false, 00:22:15.360 "nvme_io_md": false, 00:22:15.360 "write_zeroes": true, 00:22:15.360 "zcopy": true, 00:22:15.360 "get_zone_info": false, 00:22:15.360 "zone_management": false, 00:22:15.360 "zone_append": false, 00:22:15.360 "compare": false, 00:22:15.360 "compare_and_write": false, 00:22:15.360 "abort": true, 00:22:15.360 "seek_hole": false, 00:22:15.360 "seek_data": false, 00:22:15.360 "copy": true, 00:22:15.360 "nvme_iov_md": false 00:22:15.360 }, 00:22:15.360 "memory_domains": [ 00:22:15.360 { 00:22:15.360 "dma_device_id": "system", 00:22:15.360 "dma_device_type": 1 00:22:15.360 }, 00:22:15.360 { 00:22:15.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.360 "dma_device_type": 2 00:22:15.360 } 00:22:15.360 ], 00:22:15.360 "driver_specific": {} 00:22:15.360 }' 00:22:15.360 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.360 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.360 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.360 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.618 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.618 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.618 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.618 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.618 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.618 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.618 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.877 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.877 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.877 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:15.877 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.877 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.877 "name": "BaseBdev4", 00:22:15.877 "aliases": [ 00:22:15.877 "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9" 00:22:15.877 ], 00:22:15.877 "product_name": "Malloc disk", 00:22:15.877 "block_size": 512, 00:22:15.877 "num_blocks": 65536, 00:22:15.877 "uuid": "fb861af0-cd5d-47f5-9cf2-37ff6b17e7d9", 00:22:15.877 "assigned_rate_limits": { 00:22:15.877 "rw_ios_per_sec": 0, 00:22:15.877 "rw_mbytes_per_sec": 0, 00:22:15.877 "r_mbytes_per_sec": 0, 00:22:15.877 "w_mbytes_per_sec": 0 00:22:15.877 }, 00:22:15.877 "claimed": true, 00:22:15.877 "claim_type": "exclusive_write", 00:22:15.877 "zoned": false, 00:22:15.877 "supported_io_types": { 00:22:15.877 "read": true, 00:22:15.877 "write": true, 00:22:15.877 "unmap": true, 00:22:15.877 "flush": true, 00:22:15.877 "reset": true, 00:22:15.877 "nvme_admin": false, 00:22:15.877 "nvme_io": false, 00:22:15.877 "nvme_io_md": false, 00:22:15.877 "write_zeroes": true, 00:22:15.877 "zcopy": true, 00:22:15.877 "get_zone_info": false, 00:22:15.877 "zone_management": false, 00:22:15.877 "zone_append": false, 00:22:15.877 "compare": false, 00:22:15.877 "compare_and_write": false, 00:22:15.877 "abort": true, 00:22:15.877 "seek_hole": false, 00:22:15.877 "seek_data": false, 00:22:15.877 "copy": true, 00:22:15.877 "nvme_iov_md": false 00:22:15.877 }, 00:22:15.877 "memory_domains": [ 00:22:15.877 { 00:22:15.877 "dma_device_id": "system", 00:22:15.877 "dma_device_type": 1 00:22:15.877 }, 00:22:15.877 { 00:22:15.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.877 "dma_device_type": 2 00:22:15.877 } 00:22:15.877 ], 00:22:15.877 "driver_specific": {} 00:22:15.877 }' 00:22:15.877 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.135 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.135 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:16.135 14:15:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.135 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.135 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:16.135 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.135 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.393 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.393 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.393 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.393 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.393 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:16.651 [2024-07-15 14:15:02.462346] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:16.651 [2024-07-15 14:15:02.462659] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:22:16.651 [2024-07-15 14:15:02.462866] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:16.651 [2024-07-15 14:15:02.463022] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:16.651 [2024-07-15 14:15:02.463132] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 200348 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 200348 ']' 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 200348 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 200348 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 200348' 00:22:16.651 killing process with pid 200348 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 200348 00:22:16.651 [2024-07-15 14:15:02.503881] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:16.651 14:15:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 200348 00:22:16.909 [2024-07-15 14:15:02.842421] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:18.284 14:15:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:22:18.284 00:22:18.284 real 0m37.010s 00:22:18.284 user 1m8.287s 00:22:18.284 sys 0m4.258s 00:22:18.284 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:18.284 14:15:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:18.284 ************************************ 00:22:18.284 END TEST raid_state_function_test 00:22:18.284 ************************************ 00:22:18.284 14:15:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:18.284 14:15:04 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:22:18.284 14:15:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:22:18.284 14:15:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:18.284 14:15:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:18.284 ************************************ 00:22:18.284 START TEST raid_state_function_test_sb 00:22:18.284 ************************************ 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:18.284 14:15:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=201474 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 201474' 00:22:18.284 Process raid pid: 201474 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 201474 /var/tmp/spdk-raid.sock 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r 
/var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 201474 ']' 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:18.284 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:18.285 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.285 14:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:18.285 [2024-07-15 14:15:04.078633] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:22:18.285 [2024-07-15 14:15:04.079031] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.285 [2024-07-15 14:15:04.231107] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.543 [2024-07-15 14:15:04.448598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.802 [2024-07-15 14:15:04.650750] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:19.370 14:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.370 14:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:22:19.370 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:19.370 [2024-07-15 14:15:05.358919] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:19.370 [2024-07-15 14:15:05.359643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:19.370 [2024-07-15 14:15:05.359874] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:19.370 [2024-07-15 14:15:05.360117] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:19.370 [2024-07-15 14:15:05.360236] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:19.370 [2024-07-15 14:15:05.360381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:19.370 [2024-07-15 14:15:05.360521] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:19.370 [2024-07-15 14:15:05.360763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:19.628 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:19.628 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=configuring 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:19.629 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:19.629 "name": "Existed_Raid", 00:22:19.629 "uuid": "936d16be-825c-4660-a0d9-85d835f60408", 00:22:19.629 "strip_size_kb": 64, 00:22:19.629 "state": "configuring", 00:22:19.887 "raid_level": "raid0", 00:22:19.887 "superblock": true, 00:22:19.887 "num_base_bdevs": 4, 00:22:19.887 "num_base_bdevs_discovered": 0, 00:22:19.887 "num_base_bdevs_operational": 4, 00:22:19.887 "base_bdevs_list": [ 00:22:19.887 { 00:22:19.887 "name": "BaseBdev1", 00:22:19.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.887 "is_configured": false, 00:22:19.887 "data_offset": 0, 00:22:19.887 "data_size": 0 00:22:19.887 }, 00:22:19.887 { 00:22:19.887 "name": "BaseBdev2", 00:22:19.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.887 "is_configured": false, 00:22:19.887 "data_offset": 0, 00:22:19.887 "data_size": 0 00:22:19.887 }, 00:22:19.887 { 00:22:19.887 "name": "BaseBdev3", 00:22:19.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.887 "is_configured": false, 00:22:19.887 "data_offset": 0, 00:22:19.887 "data_size": 0 00:22:19.887 }, 00:22:19.887 { 00:22:19.887 "name": "BaseBdev4", 00:22:19.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:19.887 "is_configured": false, 00:22:19.887 "data_offset": 0, 00:22:19.887 "data_size": 0 00:22:19.887 } 00:22:19.887 ] 00:22:19.887 }' 00:22:19.887 14:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:19.887 14:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:20.522 14:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:20.781 [2024-07-15 14:15:06.591034] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:20.781 [2024-07-15 14:15:06.591251] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:20.781 14:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:21.040 [2024-07-15 14:15:06.819128] 
bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:21.040 [2024-07-15 14:15:06.819722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:21.040 [2024-07-15 14:15:06.819878] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:21.040 [2024-07-15 14:15:06.820018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:21.040 [2024-07-15 14:15:06.820165] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:21.040 [2024-07-15 14:15:06.820388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:21.040 [2024-07-15 14:15:06.820506] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:21.040 [2024-07-15 14:15:06.820711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:21.040 14:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:21.298 [2024-07-15 14:15:07.096761] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:21.298 BaseBdev1 00:22:21.298 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:21.298 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:21.298 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:21.298 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:21.298 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:21.298 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:21.298 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:21.556 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:21.813 [ 00:22:21.813 { 00:22:21.813 "name": "BaseBdev1", 00:22:21.813 "aliases": [ 00:22:21.813 "40928297-ca50-4fa6-875f-c376be3b2e38" 00:22:21.813 ], 00:22:21.813 "product_name": "Malloc disk", 00:22:21.813 "block_size": 512, 00:22:21.813 "num_blocks": 65536, 00:22:21.813 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:21.813 "assigned_rate_limits": { 00:22:21.813 "rw_ios_per_sec": 0, 00:22:21.813 "rw_mbytes_per_sec": 0, 00:22:21.813 "r_mbytes_per_sec": 0, 00:22:21.813 "w_mbytes_per_sec": 0 00:22:21.813 }, 00:22:21.813 "claimed": true, 00:22:21.813 "claim_type": "exclusive_write", 00:22:21.813 "zoned": false, 00:22:21.813 "supported_io_types": { 00:22:21.813 "read": true, 00:22:21.813 "write": true, 00:22:21.813 "unmap": true, 00:22:21.813 "flush": true, 00:22:21.813 "reset": true, 00:22:21.813 "nvme_admin": false, 00:22:21.813 "nvme_io": false, 00:22:21.813 "nvme_io_md": false, 00:22:21.813 "write_zeroes": true, 00:22:21.813 "zcopy": true, 00:22:21.813 "get_zone_info": false, 00:22:21.813 "zone_management": false, 00:22:21.813 "zone_append": false, 00:22:21.813 "compare": false, 
00:22:21.813 "compare_and_write": false, 00:22:21.813 "abort": true, 00:22:21.813 "seek_hole": false, 00:22:21.813 "seek_data": false, 00:22:21.813 "copy": true, 00:22:21.813 "nvme_iov_md": false 00:22:21.813 }, 00:22:21.813 "memory_domains": [ 00:22:21.813 { 00:22:21.813 "dma_device_id": "system", 00:22:21.813 "dma_device_type": 1 00:22:21.813 }, 00:22:21.813 { 00:22:21.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:21.813 "dma_device_type": 2 00:22:21.813 } 00:22:21.813 ], 00:22:21.813 "driver_specific": {} 00:22:21.813 } 00:22:21.813 ] 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.813 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:22.071 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:22.071 "name": "Existed_Raid", 00:22:22.071 "uuid": "4db7aa86-600c-44a3-b6bc-f6d378d2145b", 00:22:22.071 "strip_size_kb": 64, 00:22:22.071 "state": "configuring", 00:22:22.071 "raid_level": "raid0", 00:22:22.071 "superblock": true, 00:22:22.071 "num_base_bdevs": 4, 00:22:22.071 "num_base_bdevs_discovered": 1, 00:22:22.071 "num_base_bdevs_operational": 4, 00:22:22.071 "base_bdevs_list": [ 00:22:22.071 { 00:22:22.071 "name": "BaseBdev1", 00:22:22.071 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:22.071 "is_configured": true, 00:22:22.071 "data_offset": 2048, 00:22:22.071 "data_size": 63488 00:22:22.071 }, 00:22:22.071 { 00:22:22.071 "name": "BaseBdev2", 00:22:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.071 "is_configured": false, 00:22:22.071 "data_offset": 0, 00:22:22.071 "data_size": 0 00:22:22.071 }, 00:22:22.071 { 00:22:22.071 "name": "BaseBdev3", 00:22:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.071 "is_configured": false, 00:22:22.071 "data_offset": 0, 00:22:22.071 "data_size": 0 00:22:22.071 }, 00:22:22.071 { 00:22:22.071 "name": "BaseBdev4", 00:22:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:22.071 "is_configured": false, 00:22:22.071 "data_offset": 0, 00:22:22.071 "data_size": 0 
00:22:22.071 } 00:22:22.071 ] 00:22:22.071 }' 00:22:22.071 14:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:22.071 14:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:22.635 14:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:22.893 [2024-07-15 14:15:08.737245] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:22.893 [2024-07-15 14:15:08.737501] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:22.893 14:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:23.151 [2024-07-15 14:15:09.021368] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:23.151 [2024-07-15 14:15:09.023067] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:23.151 [2024-07-15 14:15:09.023588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:23.151 [2024-07-15 14:15:09.023748] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:23.151 [2024-07-15 14:15:09.023907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:23.151 [2024-07-15 14:15:09.024020] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:23.151 [2024-07-15 14:15:09.024159] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.151 14:15:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:23.409 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.409 "name": "Existed_Raid", 00:22:23.409 "uuid": "569a679e-a7ec-4857-abe2-7fde81ab5d16", 00:22:23.409 "strip_size_kb": 64, 00:22:23.409 "state": "configuring", 00:22:23.409 "raid_level": "raid0", 00:22:23.409 "superblock": true, 00:22:23.409 "num_base_bdevs": 4, 00:22:23.409 "num_base_bdevs_discovered": 1, 00:22:23.409 "num_base_bdevs_operational": 4, 00:22:23.409 "base_bdevs_list": [ 00:22:23.409 { 00:22:23.409 "name": "BaseBdev1", 00:22:23.409 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:23.409 "is_configured": true, 00:22:23.409 "data_offset": 2048, 00:22:23.409 "data_size": 63488 00:22:23.409 }, 00:22:23.409 { 00:22:23.409 "name": "BaseBdev2", 00:22:23.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.409 "is_configured": false, 00:22:23.409 "data_offset": 0, 00:22:23.409 "data_size": 0 00:22:23.409 }, 00:22:23.409 { 00:22:23.409 "name": "BaseBdev3", 00:22:23.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.409 "is_configured": false, 00:22:23.409 "data_offset": 0, 00:22:23.409 "data_size": 0 00:22:23.409 }, 00:22:23.409 { 00:22:23.409 "name": "BaseBdev4", 00:22:23.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.409 "is_configured": false, 00:22:23.409 "data_offset": 0, 00:22:23.409 "data_size": 0 00:22:23.409 } 00:22:23.409 ] 00:22:23.409 }' 00:22:23.409 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.409 14:15:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:23.977 14:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:24.544 [2024-07-15 14:15:10.250862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:24.544 BaseBdev2 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:24.544 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:25.113 [ 00:22:25.113 { 00:22:25.113 "name": "BaseBdev2", 00:22:25.113 "aliases": [ 00:22:25.113 "9f5beb12-cf6b-4824-90ff-64c359d757ba" 00:22:25.113 ], 00:22:25.113 "product_name": "Malloc disk", 00:22:25.113 "block_size": 512, 00:22:25.113 "num_blocks": 65536, 00:22:25.113 "uuid": "9f5beb12-cf6b-4824-90ff-64c359d757ba", 00:22:25.113 "assigned_rate_limits": { 00:22:25.113 "rw_ios_per_sec": 0, 
00:22:25.113 "rw_mbytes_per_sec": 0, 00:22:25.113 "r_mbytes_per_sec": 0, 00:22:25.113 "w_mbytes_per_sec": 0 00:22:25.113 }, 00:22:25.113 "claimed": true, 00:22:25.113 "claim_type": "exclusive_write", 00:22:25.113 "zoned": false, 00:22:25.113 "supported_io_types": { 00:22:25.113 "read": true, 00:22:25.113 "write": true, 00:22:25.113 "unmap": true, 00:22:25.113 "flush": true, 00:22:25.113 "reset": true, 00:22:25.113 "nvme_admin": false, 00:22:25.113 "nvme_io": false, 00:22:25.113 "nvme_io_md": false, 00:22:25.113 "write_zeroes": true, 00:22:25.113 "zcopy": true, 00:22:25.113 "get_zone_info": false, 00:22:25.113 "zone_management": false, 00:22:25.113 "zone_append": false, 00:22:25.113 "compare": false, 00:22:25.113 "compare_and_write": false, 00:22:25.113 "abort": true, 00:22:25.113 "seek_hole": false, 00:22:25.113 "seek_data": false, 00:22:25.113 "copy": true, 00:22:25.113 "nvme_iov_md": false 00:22:25.113 }, 00:22:25.114 "memory_domains": [ 00:22:25.114 { 00:22:25.114 "dma_device_id": "system", 00:22:25.114 "dma_device_type": 1 00:22:25.114 }, 00:22:25.114 { 00:22:25.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.114 "dma_device_type": 2 00:22:25.114 } 00:22:25.114 ], 00:22:25.114 "driver_specific": {} 00:22:25.114 } 00:22:25.114 ] 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:25.114 14:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.114 14:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:25.114 "name": "Existed_Raid", 00:22:25.114 "uuid": "569a679e-a7ec-4857-abe2-7fde81ab5d16", 00:22:25.114 "strip_size_kb": 64, 00:22:25.114 "state": "configuring", 00:22:25.114 "raid_level": "raid0", 00:22:25.114 "superblock": true, 00:22:25.114 "num_base_bdevs": 4, 00:22:25.114 "num_base_bdevs_discovered": 2, 00:22:25.114 
"num_base_bdevs_operational": 4, 00:22:25.114 "base_bdevs_list": [ 00:22:25.114 { 00:22:25.114 "name": "BaseBdev1", 00:22:25.114 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:25.114 "is_configured": true, 00:22:25.114 "data_offset": 2048, 00:22:25.114 "data_size": 63488 00:22:25.114 }, 00:22:25.114 { 00:22:25.114 "name": "BaseBdev2", 00:22:25.114 "uuid": "9f5beb12-cf6b-4824-90ff-64c359d757ba", 00:22:25.114 "is_configured": true, 00:22:25.114 "data_offset": 2048, 00:22:25.114 "data_size": 63488 00:22:25.114 }, 00:22:25.114 { 00:22:25.114 "name": "BaseBdev3", 00:22:25.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.114 "is_configured": false, 00:22:25.114 "data_offset": 0, 00:22:25.114 "data_size": 0 00:22:25.114 }, 00:22:25.114 { 00:22:25.114 "name": "BaseBdev4", 00:22:25.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:25.114 "is_configured": false, 00:22:25.114 "data_offset": 0, 00:22:25.114 "data_size": 0 00:22:25.114 } 00:22:25.114 ] 00:22:25.114 }' 00:22:25.114 14:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:25.114 14:15:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:26.050 14:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:26.309 [2024-07-15 14:15:12.071259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:26.309 BaseBdev3 00:22:26.309 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:26.309 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:26.309 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:26.309 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:26.309 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:26.309 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:26.309 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:26.643 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:26.901 [ 00:22:26.901 { 00:22:26.901 "name": "BaseBdev3", 00:22:26.901 "aliases": [ 00:22:26.901 "823d79a7-a477-4826-83ce-8f47ea6b3d7a" 00:22:26.901 ], 00:22:26.901 "product_name": "Malloc disk", 00:22:26.901 "block_size": 512, 00:22:26.901 "num_blocks": 65536, 00:22:26.901 "uuid": "823d79a7-a477-4826-83ce-8f47ea6b3d7a", 00:22:26.901 "assigned_rate_limits": { 00:22:26.901 "rw_ios_per_sec": 0, 00:22:26.901 "rw_mbytes_per_sec": 0, 00:22:26.901 "r_mbytes_per_sec": 0, 00:22:26.901 "w_mbytes_per_sec": 0 00:22:26.901 }, 00:22:26.901 "claimed": true, 00:22:26.901 "claim_type": "exclusive_write", 00:22:26.901 "zoned": false, 00:22:26.901 "supported_io_types": { 00:22:26.901 "read": true, 00:22:26.901 "write": true, 00:22:26.901 "unmap": true, 00:22:26.901 "flush": true, 00:22:26.901 "reset": true, 00:22:26.901 "nvme_admin": false, 00:22:26.901 "nvme_io": false, 00:22:26.901 "nvme_io_md": false, 00:22:26.901 
"write_zeroes": true, 00:22:26.901 "zcopy": true, 00:22:26.901 "get_zone_info": false, 00:22:26.901 "zone_management": false, 00:22:26.901 "zone_append": false, 00:22:26.901 "compare": false, 00:22:26.901 "compare_and_write": false, 00:22:26.901 "abort": true, 00:22:26.901 "seek_hole": false, 00:22:26.901 "seek_data": false, 00:22:26.901 "copy": true, 00:22:26.901 "nvme_iov_md": false 00:22:26.901 }, 00:22:26.901 "memory_domains": [ 00:22:26.901 { 00:22:26.901 "dma_device_id": "system", 00:22:26.901 "dma_device_type": 1 00:22:26.901 }, 00:22:26.901 { 00:22:26.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.901 "dma_device_type": 2 00:22:26.901 } 00:22:26.901 ], 00:22:26.901 "driver_specific": {} 00:22:26.901 } 00:22:26.901 ] 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:26.901 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.160 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:27.160 "name": "Existed_Raid", 00:22:27.160 "uuid": "569a679e-a7ec-4857-abe2-7fde81ab5d16", 00:22:27.160 "strip_size_kb": 64, 00:22:27.160 "state": "configuring", 00:22:27.160 "raid_level": "raid0", 00:22:27.160 "superblock": true, 00:22:27.160 "num_base_bdevs": 4, 00:22:27.160 "num_base_bdevs_discovered": 3, 00:22:27.160 "num_base_bdevs_operational": 4, 00:22:27.160 "base_bdevs_list": [ 00:22:27.160 { 00:22:27.160 "name": "BaseBdev1", 00:22:27.160 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:27.160 "is_configured": true, 00:22:27.160 "data_offset": 2048, 00:22:27.160 "data_size": 63488 00:22:27.160 }, 00:22:27.160 { 00:22:27.160 "name": "BaseBdev2", 00:22:27.160 "uuid": "9f5beb12-cf6b-4824-90ff-64c359d757ba", 00:22:27.160 "is_configured": true, 00:22:27.160 "data_offset": 2048, 00:22:27.160 "data_size": 63488 00:22:27.160 }, 00:22:27.160 { 
00:22:27.160 "name": "BaseBdev3", 00:22:27.160 "uuid": "823d79a7-a477-4826-83ce-8f47ea6b3d7a", 00:22:27.160 "is_configured": true, 00:22:27.160 "data_offset": 2048, 00:22:27.160 "data_size": 63488 00:22:27.160 }, 00:22:27.160 { 00:22:27.160 "name": "BaseBdev4", 00:22:27.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.160 "is_configured": false, 00:22:27.160 "data_offset": 0, 00:22:27.160 "data_size": 0 00:22:27.160 } 00:22:27.160 ] 00:22:27.160 }' 00:22:27.160 14:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:27.160 14:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:27.736 14:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:27.994 [2024-07-15 14:15:13.967142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:27.994 [2024-07-15 14:15:13.967552] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:22:27.994 [2024-07-15 14:15:13.967683] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:27.994 [2024-07-15 14:15:13.967866] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:27.994 [2024-07-15 14:15:13.968192] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:22:27.994 [2024-07-15 14:15:13.968245] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:22:27.994 [2024-07-15 14:15:13.968529] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.994 BaseBdev4 00:22:27.994 14:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:27.994 14:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:27.994 14:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:27.994 14:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:27.994 14:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:27.994 14:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:27.994 14:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:28.562 [ 00:22:28.562 { 00:22:28.562 "name": "BaseBdev4", 00:22:28.562 "aliases": [ 00:22:28.562 "88820496-d4ff-405a-9fde-64f3d44aea5e" 00:22:28.562 ], 00:22:28.562 "product_name": "Malloc disk", 00:22:28.562 "block_size": 512, 00:22:28.562 "num_blocks": 65536, 00:22:28.562 "uuid": "88820496-d4ff-405a-9fde-64f3d44aea5e", 00:22:28.562 "assigned_rate_limits": { 00:22:28.562 "rw_ios_per_sec": 0, 00:22:28.562 "rw_mbytes_per_sec": 0, 00:22:28.562 "r_mbytes_per_sec": 0, 00:22:28.562 "w_mbytes_per_sec": 0 00:22:28.562 }, 00:22:28.562 "claimed": true, 00:22:28.562 "claim_type": "exclusive_write", 00:22:28.562 "zoned": false, 00:22:28.562 "supported_io_types": { 
00:22:28.562 "read": true, 00:22:28.562 "write": true, 00:22:28.562 "unmap": true, 00:22:28.562 "flush": true, 00:22:28.562 "reset": true, 00:22:28.562 "nvme_admin": false, 00:22:28.562 "nvme_io": false, 00:22:28.562 "nvme_io_md": false, 00:22:28.562 "write_zeroes": true, 00:22:28.562 "zcopy": true, 00:22:28.562 "get_zone_info": false, 00:22:28.562 "zone_management": false, 00:22:28.562 "zone_append": false, 00:22:28.562 "compare": false, 00:22:28.562 "compare_and_write": false, 00:22:28.562 "abort": true, 00:22:28.562 "seek_hole": false, 00:22:28.562 "seek_data": false, 00:22:28.562 "copy": true, 00:22:28.562 "nvme_iov_md": false 00:22:28.562 }, 00:22:28.562 "memory_domains": [ 00:22:28.562 { 00:22:28.562 "dma_device_id": "system", 00:22:28.562 "dma_device_type": 1 00:22:28.562 }, 00:22:28.562 { 00:22:28.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:28.562 "dma_device_type": 2 00:22:28.562 } 00:22:28.562 ], 00:22:28.562 "driver_specific": {} 00:22:28.562 } 00:22:28.562 ] 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:28.562 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:28.821 "name": "Existed_Raid", 00:22:28.821 "uuid": "569a679e-a7ec-4857-abe2-7fde81ab5d16", 00:22:28.821 "strip_size_kb": 64, 00:22:28.821 "state": "online", 00:22:28.821 "raid_level": "raid0", 00:22:28.821 "superblock": true, 00:22:28.821 "num_base_bdevs": 4, 00:22:28.821 "num_base_bdevs_discovered": 4, 00:22:28.821 "num_base_bdevs_operational": 4, 00:22:28.821 "base_bdevs_list": [ 00:22:28.821 { 00:22:28.821 "name": "BaseBdev1", 00:22:28.821 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:28.821 "is_configured": true, 00:22:28.821 "data_offset": 2048, 00:22:28.821 "data_size": 63488 00:22:28.821 }, 00:22:28.821 
{ 00:22:28.821 "name": "BaseBdev2", 00:22:28.821 "uuid": "9f5beb12-cf6b-4824-90ff-64c359d757ba", 00:22:28.821 "is_configured": true, 00:22:28.821 "data_offset": 2048, 00:22:28.821 "data_size": 63488 00:22:28.821 }, 00:22:28.821 { 00:22:28.821 "name": "BaseBdev3", 00:22:28.821 "uuid": "823d79a7-a477-4826-83ce-8f47ea6b3d7a", 00:22:28.821 "is_configured": true, 00:22:28.821 "data_offset": 2048, 00:22:28.821 "data_size": 63488 00:22:28.821 }, 00:22:28.821 { 00:22:28.821 "name": "BaseBdev4", 00:22:28.821 "uuid": "88820496-d4ff-405a-9fde-64f3d44aea5e", 00:22:28.821 "is_configured": true, 00:22:28.821 "data_offset": 2048, 00:22:28.821 "data_size": 63488 00:22:28.821 } 00:22:28.821 ] 00:22:28.821 }' 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:28.821 14:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:29.757 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:29.757 [2024-07-15 14:15:15.747725] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:30.016 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:30.016 "name": "Existed_Raid", 00:22:30.016 "aliases": [ 00:22:30.016 "569a679e-a7ec-4857-abe2-7fde81ab5d16" 00:22:30.016 ], 00:22:30.016 "product_name": "Raid Volume", 00:22:30.016 "block_size": 512, 00:22:30.016 "num_blocks": 253952, 00:22:30.016 "uuid": "569a679e-a7ec-4857-abe2-7fde81ab5d16", 00:22:30.016 "assigned_rate_limits": { 00:22:30.016 "rw_ios_per_sec": 0, 00:22:30.016 "rw_mbytes_per_sec": 0, 00:22:30.016 "r_mbytes_per_sec": 0, 00:22:30.016 "w_mbytes_per_sec": 0 00:22:30.016 }, 00:22:30.016 "claimed": false, 00:22:30.016 "zoned": false, 00:22:30.016 "supported_io_types": { 00:22:30.016 "read": true, 00:22:30.016 "write": true, 00:22:30.016 "unmap": true, 00:22:30.016 "flush": true, 00:22:30.016 "reset": true, 00:22:30.016 "nvme_admin": false, 00:22:30.016 "nvme_io": false, 00:22:30.016 "nvme_io_md": false, 00:22:30.016 "write_zeroes": true, 00:22:30.016 "zcopy": false, 00:22:30.016 "get_zone_info": false, 00:22:30.016 "zone_management": false, 00:22:30.016 "zone_append": false, 00:22:30.016 "compare": false, 00:22:30.016 "compare_and_write": false, 00:22:30.016 "abort": false, 00:22:30.016 "seek_hole": false, 00:22:30.016 "seek_data": false, 00:22:30.016 "copy": false, 00:22:30.016 "nvme_iov_md": false 00:22:30.016 }, 00:22:30.016 "memory_domains": [ 00:22:30.016 { 00:22:30.016 "dma_device_id": "system", 00:22:30.016 "dma_device_type": 1 00:22:30.016 }, 00:22:30.016 { 00:22:30.016 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.016 "dma_device_type": 2 00:22:30.016 }, 00:22:30.016 { 00:22:30.016 "dma_device_id": "system", 00:22:30.016 "dma_device_type": 1 00:22:30.016 }, 00:22:30.016 { 00:22:30.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.016 "dma_device_type": 2 00:22:30.016 }, 00:22:30.016 { 00:22:30.016 "dma_device_id": "system", 00:22:30.016 "dma_device_type": 1 00:22:30.016 }, 00:22:30.016 { 00:22:30.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.016 "dma_device_type": 2 00:22:30.016 }, 00:22:30.016 { 00:22:30.016 "dma_device_id": "system", 00:22:30.016 "dma_device_type": 1 00:22:30.016 }, 00:22:30.016 { 00:22:30.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.016 "dma_device_type": 2 00:22:30.016 } 00:22:30.016 ], 00:22:30.016 "driver_specific": { 00:22:30.016 "raid": { 00:22:30.016 "uuid": "569a679e-a7ec-4857-abe2-7fde81ab5d16", 00:22:30.016 "strip_size_kb": 64, 00:22:30.016 "state": "online", 00:22:30.016 "raid_level": "raid0", 00:22:30.016 "superblock": true, 00:22:30.016 "num_base_bdevs": 4, 00:22:30.017 "num_base_bdevs_discovered": 4, 00:22:30.017 "num_base_bdevs_operational": 4, 00:22:30.017 "base_bdevs_list": [ 00:22:30.017 { 00:22:30.017 "name": "BaseBdev1", 00:22:30.017 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:30.017 "is_configured": true, 00:22:30.017 "data_offset": 2048, 00:22:30.017 "data_size": 63488 00:22:30.017 }, 00:22:30.017 { 00:22:30.017 "name": "BaseBdev2", 00:22:30.017 "uuid": "9f5beb12-cf6b-4824-90ff-64c359d757ba", 00:22:30.017 "is_configured": true, 00:22:30.017 "data_offset": 2048, 00:22:30.017 "data_size": 63488 00:22:30.017 }, 00:22:30.017 { 00:22:30.017 "name": "BaseBdev3", 00:22:30.017 "uuid": "823d79a7-a477-4826-83ce-8f47ea6b3d7a", 00:22:30.017 "is_configured": true, 00:22:30.017 "data_offset": 2048, 00:22:30.017 "data_size": 63488 00:22:30.017 }, 00:22:30.017 { 00:22:30.017 "name": "BaseBdev4", 00:22:30.017 "uuid": "88820496-d4ff-405a-9fde-64f3d44aea5e", 00:22:30.017 "is_configured": true, 00:22:30.017 "data_offset": 2048, 00:22:30.017 "data_size": 63488 00:22:30.017 } 00:22:30.017 ] 00:22:30.017 } 00:22:30.017 } 00:22:30.017 }' 00:22:30.017 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:30.017 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:30.017 BaseBdev2 00:22:30.017 BaseBdev3 00:22:30.017 BaseBdev4' 00:22:30.017 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:30.017 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:30.017 14:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:30.275 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:30.275 "name": "BaseBdev1", 00:22:30.275 "aliases": [ 00:22:30.275 "40928297-ca50-4fa6-875f-c376be3b2e38" 00:22:30.275 ], 00:22:30.275 "product_name": "Malloc disk", 00:22:30.275 "block_size": 512, 00:22:30.275 "num_blocks": 65536, 00:22:30.275 "uuid": "40928297-ca50-4fa6-875f-c376be3b2e38", 00:22:30.275 "assigned_rate_limits": { 00:22:30.275 "rw_ios_per_sec": 0, 00:22:30.275 "rw_mbytes_per_sec": 0, 00:22:30.275 "r_mbytes_per_sec": 0, 00:22:30.275 "w_mbytes_per_sec": 0 00:22:30.275 }, 00:22:30.275 
"claimed": true, 00:22:30.275 "claim_type": "exclusive_write", 00:22:30.275 "zoned": false, 00:22:30.275 "supported_io_types": { 00:22:30.275 "read": true, 00:22:30.275 "write": true, 00:22:30.275 "unmap": true, 00:22:30.275 "flush": true, 00:22:30.275 "reset": true, 00:22:30.275 "nvme_admin": false, 00:22:30.275 "nvme_io": false, 00:22:30.275 "nvme_io_md": false, 00:22:30.275 "write_zeroes": true, 00:22:30.275 "zcopy": true, 00:22:30.275 "get_zone_info": false, 00:22:30.275 "zone_management": false, 00:22:30.275 "zone_append": false, 00:22:30.275 "compare": false, 00:22:30.275 "compare_and_write": false, 00:22:30.275 "abort": true, 00:22:30.275 "seek_hole": false, 00:22:30.275 "seek_data": false, 00:22:30.275 "copy": true, 00:22:30.275 "nvme_iov_md": false 00:22:30.275 }, 00:22:30.275 "memory_domains": [ 00:22:30.275 { 00:22:30.275 "dma_device_id": "system", 00:22:30.275 "dma_device_type": 1 00:22:30.275 }, 00:22:30.275 { 00:22:30.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.275 "dma_device_type": 2 00:22:30.275 } 00:22:30.275 ], 00:22:30.275 "driver_specific": {} 00:22:30.275 }' 00:22:30.275 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.275 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:30.275 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:30.275 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.275 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:30.532 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:30.788 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:30.788 "name": "BaseBdev2", 00:22:30.788 "aliases": [ 00:22:30.788 "9f5beb12-cf6b-4824-90ff-64c359d757ba" 00:22:30.788 ], 00:22:30.788 "product_name": "Malloc disk", 00:22:30.788 "block_size": 512, 00:22:30.788 "num_blocks": 65536, 00:22:30.788 "uuid": "9f5beb12-cf6b-4824-90ff-64c359d757ba", 00:22:30.788 "assigned_rate_limits": { 00:22:30.788 "rw_ios_per_sec": 0, 00:22:30.788 "rw_mbytes_per_sec": 0, 00:22:30.788 "r_mbytes_per_sec": 0, 00:22:30.788 "w_mbytes_per_sec": 0 00:22:30.788 }, 00:22:30.788 "claimed": true, 00:22:30.788 "claim_type": "exclusive_write", 00:22:30.788 "zoned": false, 00:22:30.788 "supported_io_types": { 00:22:30.788 "read": 
true, 00:22:30.788 "write": true, 00:22:30.788 "unmap": true, 00:22:30.788 "flush": true, 00:22:30.788 "reset": true, 00:22:30.788 "nvme_admin": false, 00:22:30.788 "nvme_io": false, 00:22:30.788 "nvme_io_md": false, 00:22:30.788 "write_zeroes": true, 00:22:30.788 "zcopy": true, 00:22:30.788 "get_zone_info": false, 00:22:30.788 "zone_management": false, 00:22:30.788 "zone_append": false, 00:22:30.788 "compare": false, 00:22:30.788 "compare_and_write": false, 00:22:30.788 "abort": true, 00:22:30.788 "seek_hole": false, 00:22:30.788 "seek_data": false, 00:22:30.788 "copy": true, 00:22:30.788 "nvme_iov_md": false 00:22:30.788 }, 00:22:30.788 "memory_domains": [ 00:22:30.788 { 00:22:30.788 "dma_device_id": "system", 00:22:30.788 "dma_device_type": 1 00:22:30.788 }, 00:22:30.788 { 00:22:30.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:30.788 "dma_device_type": 2 00:22:30.788 } 00:22:30.788 ], 00:22:30.788 "driver_specific": {} 00:22:30.788 }' 00:22:30.788 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.045 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.045 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:31.045 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.045 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.045 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:31.045 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.045 14:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.045 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:31.045 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.303 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.303 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:31.303 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:31.303 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:31.303 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:31.560 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:31.560 "name": "BaseBdev3", 00:22:31.560 "aliases": [ 00:22:31.560 "823d79a7-a477-4826-83ce-8f47ea6b3d7a" 00:22:31.560 ], 00:22:31.560 "product_name": "Malloc disk", 00:22:31.560 "block_size": 512, 00:22:31.560 "num_blocks": 65536, 00:22:31.560 "uuid": "823d79a7-a477-4826-83ce-8f47ea6b3d7a", 00:22:31.560 "assigned_rate_limits": { 00:22:31.560 "rw_ios_per_sec": 0, 00:22:31.560 "rw_mbytes_per_sec": 0, 00:22:31.560 "r_mbytes_per_sec": 0, 00:22:31.560 "w_mbytes_per_sec": 0 00:22:31.560 }, 00:22:31.560 "claimed": true, 00:22:31.560 "claim_type": "exclusive_write", 00:22:31.560 "zoned": false, 00:22:31.560 "supported_io_types": { 00:22:31.560 "read": true, 00:22:31.560 "write": true, 00:22:31.560 "unmap": true, 00:22:31.560 "flush": true, 00:22:31.560 "reset": true, 00:22:31.560 "nvme_admin": false, 
00:22:31.560 "nvme_io": false, 00:22:31.560 "nvme_io_md": false, 00:22:31.560 "write_zeroes": true, 00:22:31.560 "zcopy": true, 00:22:31.560 "get_zone_info": false, 00:22:31.560 "zone_management": false, 00:22:31.560 "zone_append": false, 00:22:31.560 "compare": false, 00:22:31.560 "compare_and_write": false, 00:22:31.560 "abort": true, 00:22:31.560 "seek_hole": false, 00:22:31.560 "seek_data": false, 00:22:31.560 "copy": true, 00:22:31.560 "nvme_iov_md": false 00:22:31.560 }, 00:22:31.560 "memory_domains": [ 00:22:31.560 { 00:22:31.560 "dma_device_id": "system", 00:22:31.560 "dma_device_type": 1 00:22:31.560 }, 00:22:31.560 { 00:22:31.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:31.560 "dma_device_type": 2 00:22:31.560 } 00:22:31.560 ], 00:22:31.560 "driver_specific": {} 00:22:31.560 }' 00:22:31.560 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.560 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:31.560 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:31.560 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:31.818 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:31.819 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:31.819 14:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:32.422 "name": "BaseBdev4", 00:22:32.422 "aliases": [ 00:22:32.422 "88820496-d4ff-405a-9fde-64f3d44aea5e" 00:22:32.422 ], 00:22:32.422 "product_name": "Malloc disk", 00:22:32.422 "block_size": 512, 00:22:32.422 "num_blocks": 65536, 00:22:32.422 "uuid": "88820496-d4ff-405a-9fde-64f3d44aea5e", 00:22:32.422 "assigned_rate_limits": { 00:22:32.422 "rw_ios_per_sec": 0, 00:22:32.422 "rw_mbytes_per_sec": 0, 00:22:32.422 "r_mbytes_per_sec": 0, 00:22:32.422 "w_mbytes_per_sec": 0 00:22:32.422 }, 00:22:32.422 "claimed": true, 00:22:32.422 "claim_type": "exclusive_write", 00:22:32.422 "zoned": false, 00:22:32.422 "supported_io_types": { 00:22:32.422 "read": true, 00:22:32.422 "write": true, 00:22:32.422 "unmap": true, 00:22:32.422 "flush": true, 00:22:32.422 "reset": true, 00:22:32.422 "nvme_admin": false, 00:22:32.422 "nvme_io": false, 00:22:32.422 "nvme_io_md": false, 00:22:32.422 "write_zeroes": true, 00:22:32.422 "zcopy": true, 00:22:32.422 
"get_zone_info": false, 00:22:32.422 "zone_management": false, 00:22:32.422 "zone_append": false, 00:22:32.422 "compare": false, 00:22:32.422 "compare_and_write": false, 00:22:32.422 "abort": true, 00:22:32.422 "seek_hole": false, 00:22:32.422 "seek_data": false, 00:22:32.422 "copy": true, 00:22:32.422 "nvme_iov_md": false 00:22:32.422 }, 00:22:32.422 "memory_domains": [ 00:22:32.422 { 00:22:32.422 "dma_device_id": "system", 00:22:32.422 "dma_device_type": 1 00:22:32.422 }, 00:22:32.422 { 00:22:32.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:32.422 "dma_device_type": 2 00:22:32.422 } 00:22:32.422 ], 00:22:32.422 "driver_specific": {} 00:22:32.422 }' 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:32.422 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:32.681 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:32.681 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:32.681 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:32.940 [2024-07-15 14:15:18.767956] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:32.940 [2024-07-15 14:15:18.768004] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:32.940 [2024-07-15 14:15:18.768064] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:32.940 14:15:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:32.940 14:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.198 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:33.198 "name": "Existed_Raid", 00:22:33.198 "uuid": "569a679e-a7ec-4857-abe2-7fde81ab5d16", 00:22:33.198 "strip_size_kb": 64, 00:22:33.198 "state": "offline", 00:22:33.198 "raid_level": "raid0", 00:22:33.198 "superblock": true, 00:22:33.198 "num_base_bdevs": 4, 00:22:33.198 "num_base_bdevs_discovered": 3, 00:22:33.198 "num_base_bdevs_operational": 3, 00:22:33.198 "base_bdevs_list": [ 00:22:33.198 { 00:22:33.198 "name": null, 00:22:33.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.198 "is_configured": false, 00:22:33.198 "data_offset": 2048, 00:22:33.198 "data_size": 63488 00:22:33.198 }, 00:22:33.198 { 00:22:33.198 "name": "BaseBdev2", 00:22:33.198 "uuid": "9f5beb12-cf6b-4824-90ff-64c359d757ba", 00:22:33.198 "is_configured": true, 00:22:33.198 "data_offset": 2048, 00:22:33.198 "data_size": 63488 00:22:33.198 }, 00:22:33.198 { 00:22:33.198 "name": "BaseBdev3", 00:22:33.198 "uuid": "823d79a7-a477-4826-83ce-8f47ea6b3d7a", 00:22:33.198 "is_configured": true, 00:22:33.198 "data_offset": 2048, 00:22:33.198 "data_size": 63488 00:22:33.198 }, 00:22:33.198 { 00:22:33.198 "name": "BaseBdev4", 00:22:33.198 "uuid": "88820496-d4ff-405a-9fde-64f3d44aea5e", 00:22:33.198 "is_configured": true, 00:22:33.198 "data_offset": 2048, 00:22:33.198 "data_size": 63488 00:22:33.198 } 00:22:33.198 ] 00:22:33.198 }' 00:22:33.198 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:33.198 14:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:33.764 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:34.021 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:34.021 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.021 14:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:34.021 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:34.021 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:34.021 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:22:34.587 [2024-07-15 14:15:20.285480] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:34.587 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:34.587 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:34.587 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.587 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:34.845 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:34.845 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:34.845 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:35.103 [2024-07-15 14:15:20.893145] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:35.103 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:35.103 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:35.103 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.103 14:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:35.361 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:35.361 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:35.361 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:35.619 [2024-07-15 14:15:21.552511] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:35.619 [2024-07-15 14:15:21.552576] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:22:35.876 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:35.876 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:35.876 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.876 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:22:36.134 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:22:36.134 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:22:36.134 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:22:36.134 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:22:36.134 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:36.134 14:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:36.393 BaseBdev2 00:22:36.393 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:22:36.393 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:22:36.393 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:36.393 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:36.393 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:36.393 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:36.393 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:36.652 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:36.911 [ 00:22:36.911 { 00:22:36.911 "name": "BaseBdev2", 00:22:36.911 "aliases": [ 00:22:36.911 "eebec8ee-ce01-4d07-b1c8-80067d803f8c" 00:22:36.911 ], 00:22:36.911 "product_name": "Malloc disk", 00:22:36.911 "block_size": 512, 00:22:36.911 "num_blocks": 65536, 00:22:36.911 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:36.911 "assigned_rate_limits": { 00:22:36.911 "rw_ios_per_sec": 0, 00:22:36.911 "rw_mbytes_per_sec": 0, 00:22:36.911 "r_mbytes_per_sec": 0, 00:22:36.911 "w_mbytes_per_sec": 0 00:22:36.911 }, 00:22:36.911 "claimed": false, 00:22:36.911 "zoned": false, 00:22:36.911 "supported_io_types": { 00:22:36.911 "read": true, 00:22:36.911 "write": true, 00:22:36.911 "unmap": true, 00:22:36.911 "flush": true, 00:22:36.911 "reset": true, 00:22:36.911 "nvme_admin": false, 00:22:36.911 "nvme_io": false, 00:22:36.911 "nvme_io_md": false, 00:22:36.911 "write_zeroes": true, 00:22:36.911 "zcopy": true, 00:22:36.911 "get_zone_info": false, 00:22:36.911 "zone_management": false, 00:22:36.911 "zone_append": false, 00:22:36.911 "compare": false, 00:22:36.911 "compare_and_write": false, 00:22:36.911 "abort": true, 00:22:36.911 "seek_hole": false, 00:22:36.911 "seek_data": false, 00:22:36.911 "copy": true, 00:22:36.911 "nvme_iov_md": false 00:22:36.911 }, 00:22:36.911 "memory_domains": [ 00:22:36.911 { 00:22:36.911 "dma_device_id": "system", 00:22:36.911 "dma_device_type": 1 00:22:36.911 }, 00:22:36.911 { 00:22:36.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:36.911 "dma_device_type": 2 00:22:36.911 } 00:22:36.911 ], 00:22:36.911 "driver_specific": {} 00:22:36.911 } 00:22:36.911 ] 00:22:36.911 14:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:36.911 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:36.911 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:36.911 14:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:37.170 BaseBdev3 00:22:37.170 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:22:37.170 14:15:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:22:37.170 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:37.170 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:37.170 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:37.170 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:37.170 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:37.427 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:37.684 [ 00:22:37.684 { 00:22:37.684 "name": "BaseBdev3", 00:22:37.684 "aliases": [ 00:22:37.684 "dc766f54-e891-4d7e-9389-03bc1f908b79" 00:22:37.684 ], 00:22:37.684 "product_name": "Malloc disk", 00:22:37.684 "block_size": 512, 00:22:37.684 "num_blocks": 65536, 00:22:37.684 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:37.684 "assigned_rate_limits": { 00:22:37.684 "rw_ios_per_sec": 0, 00:22:37.684 "rw_mbytes_per_sec": 0, 00:22:37.684 "r_mbytes_per_sec": 0, 00:22:37.684 "w_mbytes_per_sec": 0 00:22:37.684 }, 00:22:37.684 "claimed": false, 00:22:37.684 "zoned": false, 00:22:37.684 "supported_io_types": { 00:22:37.684 "read": true, 00:22:37.684 "write": true, 00:22:37.684 "unmap": true, 00:22:37.684 "flush": true, 00:22:37.684 "reset": true, 00:22:37.684 "nvme_admin": false, 00:22:37.684 "nvme_io": false, 00:22:37.684 "nvme_io_md": false, 00:22:37.684 "write_zeroes": true, 00:22:37.684 "zcopy": true, 00:22:37.684 "get_zone_info": false, 00:22:37.684 "zone_management": false, 00:22:37.684 "zone_append": false, 00:22:37.684 "compare": false, 00:22:37.684 "compare_and_write": false, 00:22:37.684 "abort": true, 00:22:37.684 "seek_hole": false, 00:22:37.684 "seek_data": false, 00:22:37.684 "copy": true, 00:22:37.684 "nvme_iov_md": false 00:22:37.684 }, 00:22:37.684 "memory_domains": [ 00:22:37.684 { 00:22:37.684 "dma_device_id": "system", 00:22:37.684 "dma_device_type": 1 00:22:37.684 }, 00:22:37.684 { 00:22:37.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:37.685 "dma_device_type": 2 00:22:37.685 } 00:22:37.685 ], 00:22:37.685 "driver_specific": {} 00:22:37.685 } 00:22:37.685 ] 00:22:37.685 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:37.685 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:37.685 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:37.685 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:37.943 BaseBdev4 00:22:37.943 14:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:22:37.943 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:22:37.943 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:37.943 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:37.943 14:15:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:37.943 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:37.943 14:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:38.201 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:38.459 [ 00:22:38.459 { 00:22:38.459 "name": "BaseBdev4", 00:22:38.459 "aliases": [ 00:22:38.459 "267df7f2-309f-4c8b-ae17-a2e5d23778ef" 00:22:38.459 ], 00:22:38.459 "product_name": "Malloc disk", 00:22:38.459 "block_size": 512, 00:22:38.459 "num_blocks": 65536, 00:22:38.459 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:38.459 "assigned_rate_limits": { 00:22:38.459 "rw_ios_per_sec": 0, 00:22:38.459 "rw_mbytes_per_sec": 0, 00:22:38.459 "r_mbytes_per_sec": 0, 00:22:38.459 "w_mbytes_per_sec": 0 00:22:38.459 }, 00:22:38.459 "claimed": false, 00:22:38.459 "zoned": false, 00:22:38.459 "supported_io_types": { 00:22:38.459 "read": true, 00:22:38.459 "write": true, 00:22:38.459 "unmap": true, 00:22:38.459 "flush": true, 00:22:38.459 "reset": true, 00:22:38.459 "nvme_admin": false, 00:22:38.459 "nvme_io": false, 00:22:38.459 "nvme_io_md": false, 00:22:38.459 "write_zeroes": true, 00:22:38.459 "zcopy": true, 00:22:38.459 "get_zone_info": false, 00:22:38.459 "zone_management": false, 00:22:38.459 "zone_append": false, 00:22:38.459 "compare": false, 00:22:38.459 "compare_and_write": false, 00:22:38.459 "abort": true, 00:22:38.459 "seek_hole": false, 00:22:38.459 "seek_data": false, 00:22:38.459 "copy": true, 00:22:38.459 "nvme_iov_md": false 00:22:38.459 }, 00:22:38.459 "memory_domains": [ 00:22:38.459 { 00:22:38.459 "dma_device_id": "system", 00:22:38.459 "dma_device_type": 1 00:22:38.459 }, 00:22:38.459 { 00:22:38.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:38.460 "dma_device_type": 2 00:22:38.460 } 00:22:38.460 ], 00:22:38.460 "driver_specific": {} 00:22:38.460 } 00:22:38.460 ] 00:22:38.460 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:38.460 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:22:38.460 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:22:38.460 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:38.718 [2024-07-15 14:15:24.528174] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:38.718 [2024-07-15 14:15:24.528496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:38.718 [2024-07-15 14:15:24.528685] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:38.718 [2024-07-15 14:15:24.530709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:38.718 [2024-07-15 14:15:24.530934] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.718 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:38.975 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:38.975 "name": "Existed_Raid", 00:22:38.975 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:38.975 "strip_size_kb": 64, 00:22:38.975 "state": "configuring", 00:22:38.975 "raid_level": "raid0", 00:22:38.975 "superblock": true, 00:22:38.975 "num_base_bdevs": 4, 00:22:38.975 "num_base_bdevs_discovered": 3, 00:22:38.975 "num_base_bdevs_operational": 4, 00:22:38.975 "base_bdevs_list": [ 00:22:38.975 { 00:22:38.975 "name": "BaseBdev1", 00:22:38.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:38.975 "is_configured": false, 00:22:38.975 "data_offset": 0, 00:22:38.975 "data_size": 0 00:22:38.975 }, 00:22:38.975 { 00:22:38.975 "name": "BaseBdev2", 00:22:38.975 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:38.975 "is_configured": true, 00:22:38.975 "data_offset": 2048, 00:22:38.975 "data_size": 63488 00:22:38.975 }, 00:22:38.975 { 00:22:38.975 "name": "BaseBdev3", 00:22:38.975 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:38.975 "is_configured": true, 00:22:38.975 "data_offset": 2048, 00:22:38.975 "data_size": 63488 00:22:38.975 }, 00:22:38.975 { 00:22:38.975 "name": "BaseBdev4", 00:22:38.975 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:38.975 "is_configured": true, 00:22:38.975 "data_offset": 2048, 00:22:38.975 "data_size": 63488 00:22:38.975 } 00:22:38.975 ] 00:22:38.975 }' 00:22:38.975 14:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:38.975 14:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:39.539 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:39.796 [2024-07-15 14:15:25.652252] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:39.796 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:39.796 14:15:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:39.796 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:39.796 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:39.796 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:39.796 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:39.797 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:39.797 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:39.797 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:39.797 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:39.797 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.797 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:40.055 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.055 "name": "Existed_Raid", 00:22:40.055 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:40.055 "strip_size_kb": 64, 00:22:40.055 "state": "configuring", 00:22:40.055 "raid_level": "raid0", 00:22:40.055 "superblock": true, 00:22:40.055 "num_base_bdevs": 4, 00:22:40.055 "num_base_bdevs_discovered": 2, 00:22:40.055 "num_base_bdevs_operational": 4, 00:22:40.055 "base_bdevs_list": [ 00:22:40.055 { 00:22:40.055 "name": "BaseBdev1", 00:22:40.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.055 "is_configured": false, 00:22:40.055 "data_offset": 0, 00:22:40.055 "data_size": 0 00:22:40.055 }, 00:22:40.055 { 00:22:40.055 "name": null, 00:22:40.055 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:40.055 "is_configured": false, 00:22:40.055 "data_offset": 2048, 00:22:40.056 "data_size": 63488 00:22:40.056 }, 00:22:40.056 { 00:22:40.056 "name": "BaseBdev3", 00:22:40.056 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:40.056 "is_configured": true, 00:22:40.056 "data_offset": 2048, 00:22:40.056 "data_size": 63488 00:22:40.056 }, 00:22:40.056 { 00:22:40.056 "name": "BaseBdev4", 00:22:40.056 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:40.056 "is_configured": true, 00:22:40.056 "data_offset": 2048, 00:22:40.056 "data_size": 63488 00:22:40.056 } 00:22:40.056 ] 00:22:40.056 }' 00:22:40.056 14:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.056 14:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:40.989 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.989 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:40.989 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:22:40.989 14:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:41.246 [2024-07-15 14:15:27.216195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:41.246 BaseBdev1 00:22:41.246 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:22:41.246 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:41.246 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:41.246 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:41.246 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:41.246 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:41.246 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:41.504 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:41.763 [ 00:22:41.763 { 00:22:41.763 "name": "BaseBdev1", 00:22:41.763 "aliases": [ 00:22:41.763 "806fa780-6d1e-4231-8074-010c2def5c68" 00:22:41.763 ], 00:22:41.763 "product_name": "Malloc disk", 00:22:41.763 "block_size": 512, 00:22:41.763 "num_blocks": 65536, 00:22:41.763 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:41.763 "assigned_rate_limits": { 00:22:41.763 "rw_ios_per_sec": 0, 00:22:41.763 "rw_mbytes_per_sec": 0, 00:22:41.763 "r_mbytes_per_sec": 0, 00:22:41.763 "w_mbytes_per_sec": 0 00:22:41.763 }, 00:22:41.763 "claimed": true, 00:22:41.763 "claim_type": "exclusive_write", 00:22:41.763 "zoned": false, 00:22:41.763 "supported_io_types": { 00:22:41.763 "read": true, 00:22:41.763 "write": true, 00:22:41.763 "unmap": true, 00:22:41.763 "flush": true, 00:22:41.763 "reset": true, 00:22:41.763 "nvme_admin": false, 00:22:41.763 "nvme_io": false, 00:22:41.763 "nvme_io_md": false, 00:22:41.763 "write_zeroes": true, 00:22:41.763 "zcopy": true, 00:22:41.763 "get_zone_info": false, 00:22:41.763 "zone_management": false, 00:22:41.763 "zone_append": false, 00:22:41.763 "compare": false, 00:22:41.763 "compare_and_write": false, 00:22:41.763 "abort": true, 00:22:41.763 "seek_hole": false, 00:22:41.763 "seek_data": false, 00:22:41.763 "copy": true, 00:22:41.763 "nvme_iov_md": false 00:22:41.763 }, 00:22:41.763 "memory_domains": [ 00:22:41.763 { 00:22:41.763 "dma_device_id": "system", 00:22:41.763 "dma_device_type": 1 00:22:41.763 }, 00:22:41.763 { 00:22:41.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:41.763 "dma_device_type": 2 00:22:41.763 } 00:22:41.763 ], 00:22:41.763 "driver_specific": {} 00:22:41.763 } 00:22:41.763 ] 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.763 14:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:42.021 14:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.021 "name": "Existed_Raid", 00:22:42.021 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:42.021 "strip_size_kb": 64, 00:22:42.021 "state": "configuring", 00:22:42.021 "raid_level": "raid0", 00:22:42.021 "superblock": true, 00:22:42.021 "num_base_bdevs": 4, 00:22:42.021 "num_base_bdevs_discovered": 3, 00:22:42.021 "num_base_bdevs_operational": 4, 00:22:42.021 "base_bdevs_list": [ 00:22:42.021 { 00:22:42.021 "name": "BaseBdev1", 00:22:42.021 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:42.021 "is_configured": true, 00:22:42.021 "data_offset": 2048, 00:22:42.021 "data_size": 63488 00:22:42.021 }, 00:22:42.021 { 00:22:42.021 "name": null, 00:22:42.021 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:42.021 "is_configured": false, 00:22:42.021 "data_offset": 2048, 00:22:42.021 "data_size": 63488 00:22:42.021 }, 00:22:42.021 { 00:22:42.021 "name": "BaseBdev3", 00:22:42.021 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:42.021 "is_configured": true, 00:22:42.021 "data_offset": 2048, 00:22:42.021 "data_size": 63488 00:22:42.021 }, 00:22:42.021 { 00:22:42.021 "name": "BaseBdev4", 00:22:42.021 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:42.021 "is_configured": true, 00:22:42.021 "data_offset": 2048, 00:22:42.021 "data_size": 63488 00:22:42.021 } 00:22:42.021 ] 00:22:42.021 }' 00:22:42.021 14:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.280 14:15:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:42.846 14:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.846 14:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:43.105 14:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:22:43.105 14:15:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:22:43.364 [2024-07-15 14:15:29.157742] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
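At this point in the trace the test has assembled the array with one base bdev intentionally missing and keeps re-checking that it stays in the 'configuring' state. For readers following along, the RPC sequence can be condensed into the short shell sketch below; every rpc.py subcommand and parameter is copied from the log above, while the loop, the rpc/sock variables and the final jq formatting are illustrative glue rather than lines from the captured run.

```bash
#!/usr/bin/env bash
# Condensed recap of the RPC flow traced above (illustrative, not captured output).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Create three of the four malloc base bdevs (32 MiB, 512-byte blocks);
# BaseBdev1 is deliberately absent at this stage.
for bdev in BaseBdev2 BaseBdev3 BaseBdev4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$bdev"
    "$rpc" -s "$sock" bdev_wait_for_examine
    "$rpc" -s "$sock" bdev_get_bdevs -b "$bdev" -t 2000 > /dev/null
done

# Assemble a raid0 volume with a superblock (-s) and 64 KiB strip size; with
# one named base bdev missing the array remains in the 'configuring' state.
"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Inspect the array the same way verify_raid_bdev_state does in the trace.
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")
             | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs) base bdevs discovered"'
```

The later portions of the trace follow the same pattern: bdev_raid_remove_base_bdev and bdev_raid_add_base_bdev toggle individual members, and the state check above is repeated after each step until all four base bdevs are discovered and the array reports 'online'.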
00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.364 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.622 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:43.622 "name": "Existed_Raid", 00:22:43.622 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:43.622 "strip_size_kb": 64, 00:22:43.622 "state": "configuring", 00:22:43.622 "raid_level": "raid0", 00:22:43.622 "superblock": true, 00:22:43.622 "num_base_bdevs": 4, 00:22:43.622 "num_base_bdevs_discovered": 2, 00:22:43.622 "num_base_bdevs_operational": 4, 00:22:43.623 "base_bdevs_list": [ 00:22:43.623 { 00:22:43.623 "name": "BaseBdev1", 00:22:43.623 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:43.623 "is_configured": true, 00:22:43.623 "data_offset": 2048, 00:22:43.623 "data_size": 63488 00:22:43.623 }, 00:22:43.623 { 00:22:43.623 "name": null, 00:22:43.623 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:43.623 "is_configured": false, 00:22:43.623 "data_offset": 2048, 00:22:43.623 "data_size": 63488 00:22:43.623 }, 00:22:43.623 { 00:22:43.623 "name": null, 00:22:43.623 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:43.623 "is_configured": false, 00:22:43.623 "data_offset": 2048, 00:22:43.623 "data_size": 63488 00:22:43.623 }, 00:22:43.623 { 00:22:43.623 "name": "BaseBdev4", 00:22:43.623 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:43.623 "is_configured": true, 00:22:43.623 "data_offset": 2048, 00:22:43.623 "data_size": 63488 00:22:43.623 } 00:22:43.623 ] 00:22:43.623 }' 00:22:43.623 14:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:43.623 14:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:44.190 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.190 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:44.449 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:44.449 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:44.708 [2024-07-15 14:15:30.648429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.708 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:44.967 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:44.967 "name": "Existed_Raid", 00:22:44.967 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:44.967 "strip_size_kb": 64, 00:22:44.967 "state": "configuring", 00:22:44.967 "raid_level": "raid0", 00:22:44.967 "superblock": true, 00:22:44.967 "num_base_bdevs": 4, 00:22:44.967 "num_base_bdevs_discovered": 3, 00:22:44.967 "num_base_bdevs_operational": 4, 00:22:44.967 "base_bdevs_list": [ 00:22:44.967 { 00:22:44.967 "name": "BaseBdev1", 00:22:44.967 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:44.967 "is_configured": true, 00:22:44.967 "data_offset": 2048, 00:22:44.967 "data_size": 63488 00:22:44.967 }, 00:22:44.967 { 00:22:44.967 "name": null, 00:22:44.967 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:44.967 "is_configured": false, 00:22:44.967 "data_offset": 2048, 00:22:44.967 "data_size": 63488 00:22:44.967 }, 00:22:44.967 { 00:22:44.967 "name": "BaseBdev3", 00:22:44.967 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:44.967 "is_configured": true, 00:22:44.967 "data_offset": 2048, 00:22:44.967 "data_size": 63488 00:22:44.967 }, 00:22:44.967 { 00:22:44.967 "name": "BaseBdev4", 00:22:44.967 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:44.967 "is_configured": true, 00:22:44.967 "data_offset": 2048, 00:22:44.967 "data_size": 63488 00:22:44.967 } 00:22:44.967 ] 00:22:44.967 }' 00:22:44.967 14:15:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:44.967 14:15:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:45.900 14:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.900 14:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:45.900 14:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:45.900 14:15:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:46.158 [2024-07-15 14:15:32.069789] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.431 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.689 14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:46.689 "name": "Existed_Raid", 00:22:46.690 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:46.690 "strip_size_kb": 64, 00:22:46.690 "state": "configuring", 00:22:46.690 "raid_level": "raid0", 00:22:46.690 "superblock": true, 00:22:46.690 "num_base_bdevs": 4, 00:22:46.690 "num_base_bdevs_discovered": 2, 00:22:46.690 "num_base_bdevs_operational": 4, 00:22:46.690 "base_bdevs_list": [ 00:22:46.690 { 00:22:46.690 "name": null, 00:22:46.690 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:46.690 "is_configured": false, 00:22:46.690 "data_offset": 2048, 00:22:46.690 "data_size": 63488 00:22:46.690 }, 00:22:46.690 { 00:22:46.690 "name": null, 00:22:46.690 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:46.690 "is_configured": false, 00:22:46.690 "data_offset": 2048, 00:22:46.690 "data_size": 63488 00:22:46.690 }, 00:22:46.690 { 00:22:46.690 "name": "BaseBdev3", 00:22:46.690 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:46.690 "is_configured": true, 00:22:46.690 "data_offset": 2048, 00:22:46.690 "data_size": 63488 00:22:46.690 }, 00:22:46.690 { 00:22:46.690 "name": "BaseBdev4", 00:22:46.690 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:46.690 "is_configured": true, 00:22:46.690 "data_offset": 2048, 00:22:46.690 "data_size": 63488 00:22:46.690 } 00:22:46.690 ] 00:22:46.690 }' 00:22:46.690 
14:15:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:46.690 14:15:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:47.257 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:47.257 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.515 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:47.515 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:47.774 [2024-07-15 14:15:33.689569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.774 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.033 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.033 "name": "Existed_Raid", 00:22:48.033 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:48.033 "strip_size_kb": 64, 00:22:48.033 "state": "configuring", 00:22:48.033 "raid_level": "raid0", 00:22:48.033 "superblock": true, 00:22:48.033 "num_base_bdevs": 4, 00:22:48.033 "num_base_bdevs_discovered": 3, 00:22:48.033 "num_base_bdevs_operational": 4, 00:22:48.033 "base_bdevs_list": [ 00:22:48.033 { 00:22:48.033 "name": null, 00:22:48.033 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:48.033 "is_configured": false, 00:22:48.033 "data_offset": 2048, 00:22:48.033 "data_size": 63488 00:22:48.033 }, 00:22:48.033 { 00:22:48.033 "name": "BaseBdev2", 00:22:48.033 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:48.033 "is_configured": true, 00:22:48.033 "data_offset": 2048, 00:22:48.033 "data_size": 63488 00:22:48.033 }, 00:22:48.033 { 00:22:48.034 "name": "BaseBdev3", 00:22:48.034 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:48.034 
"is_configured": true, 00:22:48.034 "data_offset": 2048, 00:22:48.034 "data_size": 63488 00:22:48.034 }, 00:22:48.034 { 00:22:48.034 "name": "BaseBdev4", 00:22:48.034 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:48.034 "is_configured": true, 00:22:48.034 "data_offset": 2048, 00:22:48.034 "data_size": 63488 00:22:48.034 } 00:22:48.034 ] 00:22:48.034 }' 00:22:48.034 14:15:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.034 14:15:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:48.970 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.970 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:48.970 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:48.970 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.970 14:15:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:49.232 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 806fa780-6d1e-4231-8074-010c2def5c68 00:22:49.491 [2024-07-15 14:15:35.423713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:49.491 [2024-07-15 14:15:35.424273] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:22:49.491 [2024-07-15 14:15:35.424484] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:49.491 [2024-07-15 14:15:35.424787] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:22:49.491 [2024-07-15 14:15:35.425234] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:22:49.491 NewBaseBdev 00:22:49.491 [2024-07-15 14:15:35.425569] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:22:49.491 [2024-07-15 14:15:35.425872] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.491 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:49.491 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:22:49.491 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:49.491 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:22:49.491 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:49.491 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:49.491 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.749 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 
-b NewBaseBdev -t 2000 00:22:50.007 [ 00:22:50.007 { 00:22:50.007 "name": "NewBaseBdev", 00:22:50.007 "aliases": [ 00:22:50.007 "806fa780-6d1e-4231-8074-010c2def5c68" 00:22:50.007 ], 00:22:50.007 "product_name": "Malloc disk", 00:22:50.007 "block_size": 512, 00:22:50.007 "num_blocks": 65536, 00:22:50.007 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:50.007 "assigned_rate_limits": { 00:22:50.007 "rw_ios_per_sec": 0, 00:22:50.007 "rw_mbytes_per_sec": 0, 00:22:50.007 "r_mbytes_per_sec": 0, 00:22:50.007 "w_mbytes_per_sec": 0 00:22:50.007 }, 00:22:50.007 "claimed": true, 00:22:50.008 "claim_type": "exclusive_write", 00:22:50.008 "zoned": false, 00:22:50.008 "supported_io_types": { 00:22:50.008 "read": true, 00:22:50.008 "write": true, 00:22:50.008 "unmap": true, 00:22:50.008 "flush": true, 00:22:50.008 "reset": true, 00:22:50.008 "nvme_admin": false, 00:22:50.008 "nvme_io": false, 00:22:50.008 "nvme_io_md": false, 00:22:50.008 "write_zeroes": true, 00:22:50.008 "zcopy": true, 00:22:50.008 "get_zone_info": false, 00:22:50.008 "zone_management": false, 00:22:50.008 "zone_append": false, 00:22:50.008 "compare": false, 00:22:50.008 "compare_and_write": false, 00:22:50.008 "abort": true, 00:22:50.008 "seek_hole": false, 00:22:50.008 "seek_data": false, 00:22:50.008 "copy": true, 00:22:50.008 "nvme_iov_md": false 00:22:50.008 }, 00:22:50.008 "memory_domains": [ 00:22:50.008 { 00:22:50.008 "dma_device_id": "system", 00:22:50.008 "dma_device_type": 1 00:22:50.008 }, 00:22:50.008 { 00:22:50.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.008 "dma_device_type": 2 00:22:50.008 } 00:22:50.008 ], 00:22:50.008 "driver_specific": {} 00:22:50.008 } 00:22:50.008 ] 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.008 14:15:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.267 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.267 "name": "Existed_Raid", 00:22:50.267 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:50.267 "strip_size_kb": 64, 00:22:50.267 "state": 
"online", 00:22:50.267 "raid_level": "raid0", 00:22:50.267 "superblock": true, 00:22:50.267 "num_base_bdevs": 4, 00:22:50.267 "num_base_bdevs_discovered": 4, 00:22:50.267 "num_base_bdevs_operational": 4, 00:22:50.267 "base_bdevs_list": [ 00:22:50.267 { 00:22:50.267 "name": "NewBaseBdev", 00:22:50.267 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:50.267 "is_configured": true, 00:22:50.267 "data_offset": 2048, 00:22:50.267 "data_size": 63488 00:22:50.267 }, 00:22:50.267 { 00:22:50.267 "name": "BaseBdev2", 00:22:50.267 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:50.267 "is_configured": true, 00:22:50.267 "data_offset": 2048, 00:22:50.267 "data_size": 63488 00:22:50.267 }, 00:22:50.267 { 00:22:50.267 "name": "BaseBdev3", 00:22:50.267 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:50.267 "is_configured": true, 00:22:50.267 "data_offset": 2048, 00:22:50.267 "data_size": 63488 00:22:50.267 }, 00:22:50.267 { 00:22:50.267 "name": "BaseBdev4", 00:22:50.267 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:50.267 "is_configured": true, 00:22:50.267 "data_offset": 2048, 00:22:50.267 "data_size": 63488 00:22:50.267 } 00:22:50.267 ] 00:22:50.267 }' 00:22:50.267 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.267 14:15:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:50.836 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:51.093 [2024-07-15 14:15:36.976924] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:51.093 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:51.093 "name": "Existed_Raid", 00:22:51.093 "aliases": [ 00:22:51.093 "0a6f5eaa-6177-438a-ad5f-8742b4db60dd" 00:22:51.093 ], 00:22:51.093 "product_name": "Raid Volume", 00:22:51.093 "block_size": 512, 00:22:51.093 "num_blocks": 253952, 00:22:51.093 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:51.093 "assigned_rate_limits": { 00:22:51.093 "rw_ios_per_sec": 0, 00:22:51.093 "rw_mbytes_per_sec": 0, 00:22:51.093 "r_mbytes_per_sec": 0, 00:22:51.093 "w_mbytes_per_sec": 0 00:22:51.094 }, 00:22:51.094 "claimed": false, 00:22:51.094 "zoned": false, 00:22:51.094 "supported_io_types": { 00:22:51.094 "read": true, 00:22:51.094 "write": true, 00:22:51.094 "unmap": true, 00:22:51.094 "flush": true, 00:22:51.094 "reset": true, 00:22:51.094 "nvme_admin": false, 00:22:51.094 "nvme_io": false, 00:22:51.094 "nvme_io_md": false, 00:22:51.094 "write_zeroes": true, 00:22:51.094 "zcopy": false, 00:22:51.094 "get_zone_info": false, 00:22:51.094 
"zone_management": false, 00:22:51.094 "zone_append": false, 00:22:51.094 "compare": false, 00:22:51.094 "compare_and_write": false, 00:22:51.094 "abort": false, 00:22:51.094 "seek_hole": false, 00:22:51.094 "seek_data": false, 00:22:51.094 "copy": false, 00:22:51.094 "nvme_iov_md": false 00:22:51.094 }, 00:22:51.094 "memory_domains": [ 00:22:51.094 { 00:22:51.094 "dma_device_id": "system", 00:22:51.094 "dma_device_type": 1 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.094 "dma_device_type": 2 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "dma_device_id": "system", 00:22:51.094 "dma_device_type": 1 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.094 "dma_device_type": 2 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "dma_device_id": "system", 00:22:51.094 "dma_device_type": 1 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.094 "dma_device_type": 2 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "dma_device_id": "system", 00:22:51.094 "dma_device_type": 1 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.094 "dma_device_type": 2 00:22:51.094 } 00:22:51.094 ], 00:22:51.094 "driver_specific": { 00:22:51.094 "raid": { 00:22:51.094 "uuid": "0a6f5eaa-6177-438a-ad5f-8742b4db60dd", 00:22:51.094 "strip_size_kb": 64, 00:22:51.094 "state": "online", 00:22:51.094 "raid_level": "raid0", 00:22:51.094 "superblock": true, 00:22:51.094 "num_base_bdevs": 4, 00:22:51.094 "num_base_bdevs_discovered": 4, 00:22:51.094 "num_base_bdevs_operational": 4, 00:22:51.094 "base_bdevs_list": [ 00:22:51.094 { 00:22:51.094 "name": "NewBaseBdev", 00:22:51.094 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:51.094 "is_configured": true, 00:22:51.094 "data_offset": 2048, 00:22:51.094 "data_size": 63488 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "name": "BaseBdev2", 00:22:51.094 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:51.094 "is_configured": true, 00:22:51.094 "data_offset": 2048, 00:22:51.094 "data_size": 63488 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "name": "BaseBdev3", 00:22:51.094 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:51.094 "is_configured": true, 00:22:51.094 "data_offset": 2048, 00:22:51.094 "data_size": 63488 00:22:51.094 }, 00:22:51.094 { 00:22:51.094 "name": "BaseBdev4", 00:22:51.094 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:51.094 "is_configured": true, 00:22:51.094 "data_offset": 2048, 00:22:51.094 "data_size": 63488 00:22:51.094 } 00:22:51.094 ] 00:22:51.094 } 00:22:51.094 } 00:22:51.094 }' 00:22:51.094 14:15:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:51.094 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:51.094 BaseBdev2 00:22:51.094 BaseBdev3 00:22:51.094 BaseBdev4' 00:22:51.094 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:51.094 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:51.094 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:51.660 "name": 
"NewBaseBdev", 00:22:51.660 "aliases": [ 00:22:51.660 "806fa780-6d1e-4231-8074-010c2def5c68" 00:22:51.660 ], 00:22:51.660 "product_name": "Malloc disk", 00:22:51.660 "block_size": 512, 00:22:51.660 "num_blocks": 65536, 00:22:51.660 "uuid": "806fa780-6d1e-4231-8074-010c2def5c68", 00:22:51.660 "assigned_rate_limits": { 00:22:51.660 "rw_ios_per_sec": 0, 00:22:51.660 "rw_mbytes_per_sec": 0, 00:22:51.660 "r_mbytes_per_sec": 0, 00:22:51.660 "w_mbytes_per_sec": 0 00:22:51.660 }, 00:22:51.660 "claimed": true, 00:22:51.660 "claim_type": "exclusive_write", 00:22:51.660 "zoned": false, 00:22:51.660 "supported_io_types": { 00:22:51.660 "read": true, 00:22:51.660 "write": true, 00:22:51.660 "unmap": true, 00:22:51.660 "flush": true, 00:22:51.660 "reset": true, 00:22:51.660 "nvme_admin": false, 00:22:51.660 "nvme_io": false, 00:22:51.660 "nvme_io_md": false, 00:22:51.660 "write_zeroes": true, 00:22:51.660 "zcopy": true, 00:22:51.660 "get_zone_info": false, 00:22:51.660 "zone_management": false, 00:22:51.660 "zone_append": false, 00:22:51.660 "compare": false, 00:22:51.660 "compare_and_write": false, 00:22:51.660 "abort": true, 00:22:51.660 "seek_hole": false, 00:22:51.660 "seek_data": false, 00:22:51.660 "copy": true, 00:22:51.660 "nvme_iov_md": false 00:22:51.660 }, 00:22:51.660 "memory_domains": [ 00:22:51.660 { 00:22:51.660 "dma_device_id": "system", 00:22:51.660 "dma_device_type": 1 00:22:51.660 }, 00:22:51.660 { 00:22:51.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.660 "dma_device_type": 2 00:22:51.660 } 00:22:51.660 ], 00:22:51.660 "driver_specific": {} 00:22:51.660 }' 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:51.660 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.919 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:51.919 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:51.919 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:51.919 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:51.919 14:15:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:52.178 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:52.178 "name": "BaseBdev2", 00:22:52.178 "aliases": [ 00:22:52.178 "eebec8ee-ce01-4d07-b1c8-80067d803f8c" 00:22:52.178 ], 00:22:52.178 "product_name": "Malloc disk", 
00:22:52.178 "block_size": 512, 00:22:52.178 "num_blocks": 65536, 00:22:52.178 "uuid": "eebec8ee-ce01-4d07-b1c8-80067d803f8c", 00:22:52.178 "assigned_rate_limits": { 00:22:52.178 "rw_ios_per_sec": 0, 00:22:52.178 "rw_mbytes_per_sec": 0, 00:22:52.178 "r_mbytes_per_sec": 0, 00:22:52.178 "w_mbytes_per_sec": 0 00:22:52.178 }, 00:22:52.178 "claimed": true, 00:22:52.178 "claim_type": "exclusive_write", 00:22:52.178 "zoned": false, 00:22:52.178 "supported_io_types": { 00:22:52.178 "read": true, 00:22:52.178 "write": true, 00:22:52.178 "unmap": true, 00:22:52.178 "flush": true, 00:22:52.178 "reset": true, 00:22:52.178 "nvme_admin": false, 00:22:52.178 "nvme_io": false, 00:22:52.178 "nvme_io_md": false, 00:22:52.178 "write_zeroes": true, 00:22:52.178 "zcopy": true, 00:22:52.178 "get_zone_info": false, 00:22:52.178 "zone_management": false, 00:22:52.178 "zone_append": false, 00:22:52.178 "compare": false, 00:22:52.178 "compare_and_write": false, 00:22:52.178 "abort": true, 00:22:52.178 "seek_hole": false, 00:22:52.178 "seek_data": false, 00:22:52.178 "copy": true, 00:22:52.178 "nvme_iov_md": false 00:22:52.178 }, 00:22:52.178 "memory_domains": [ 00:22:52.178 { 00:22:52.178 "dma_device_id": "system", 00:22:52.178 "dma_device_type": 1 00:22:52.178 }, 00:22:52.178 { 00:22:52.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.178 "dma_device_type": 2 00:22:52.178 } 00:22:52.178 ], 00:22:52.178 "driver_specific": {} 00:22:52.178 }' 00:22:52.178 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.178 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.178 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:52.178 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.556 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:52.556 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:52.556 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:52.556 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:52.556 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:52.556 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:52.557 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:52.557 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:52.557 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:52.557 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:52.557 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:52.816 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:52.816 "name": "BaseBdev3", 00:22:52.816 "aliases": [ 00:22:52.816 "dc766f54-e891-4d7e-9389-03bc1f908b79" 00:22:52.816 ], 00:22:52.816 "product_name": "Malloc disk", 00:22:52.816 "block_size": 512, 00:22:52.816 "num_blocks": 65536, 00:22:52.816 "uuid": "dc766f54-e891-4d7e-9389-03bc1f908b79", 00:22:52.816 
"assigned_rate_limits": { 00:22:52.816 "rw_ios_per_sec": 0, 00:22:52.816 "rw_mbytes_per_sec": 0, 00:22:52.816 "r_mbytes_per_sec": 0, 00:22:52.816 "w_mbytes_per_sec": 0 00:22:52.816 }, 00:22:52.816 "claimed": true, 00:22:52.816 "claim_type": "exclusive_write", 00:22:52.816 "zoned": false, 00:22:52.816 "supported_io_types": { 00:22:52.816 "read": true, 00:22:52.816 "write": true, 00:22:52.816 "unmap": true, 00:22:52.816 "flush": true, 00:22:52.816 "reset": true, 00:22:52.816 "nvme_admin": false, 00:22:52.816 "nvme_io": false, 00:22:52.816 "nvme_io_md": false, 00:22:52.816 "write_zeroes": true, 00:22:52.816 "zcopy": true, 00:22:52.816 "get_zone_info": false, 00:22:52.816 "zone_management": false, 00:22:52.816 "zone_append": false, 00:22:52.816 "compare": false, 00:22:52.816 "compare_and_write": false, 00:22:52.816 "abort": true, 00:22:52.816 "seek_hole": false, 00:22:52.816 "seek_data": false, 00:22:52.816 "copy": true, 00:22:52.816 "nvme_iov_md": false 00:22:52.816 }, 00:22:52.816 "memory_domains": [ 00:22:52.816 { 00:22:52.816 "dma_device_id": "system", 00:22:52.816 "dma_device_type": 1 00:22:52.816 }, 00:22:52.816 { 00:22:52.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:52.816 "dma_device_type": 2 00:22:52.816 } 00:22:52.816 ], 00:22:52.816 "driver_specific": {} 00:22:52.816 }' 00:22:52.816 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:52.816 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.075 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:53.075 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.075 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.075 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.075 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.075 14:15:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.075 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.075 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.075 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.333 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.333 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:53.333 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:53.333 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:53.592 "name": "BaseBdev4", 00:22:53.592 "aliases": [ 00:22:53.592 "267df7f2-309f-4c8b-ae17-a2e5d23778ef" 00:22:53.592 ], 00:22:53.592 "product_name": "Malloc disk", 00:22:53.592 "block_size": 512, 00:22:53.592 "num_blocks": 65536, 00:22:53.592 "uuid": "267df7f2-309f-4c8b-ae17-a2e5d23778ef", 00:22:53.592 "assigned_rate_limits": { 00:22:53.592 "rw_ios_per_sec": 0, 00:22:53.592 "rw_mbytes_per_sec": 0, 00:22:53.592 "r_mbytes_per_sec": 0, 00:22:53.592 
"w_mbytes_per_sec": 0 00:22:53.592 }, 00:22:53.592 "claimed": true, 00:22:53.592 "claim_type": "exclusive_write", 00:22:53.592 "zoned": false, 00:22:53.592 "supported_io_types": { 00:22:53.592 "read": true, 00:22:53.592 "write": true, 00:22:53.592 "unmap": true, 00:22:53.592 "flush": true, 00:22:53.592 "reset": true, 00:22:53.592 "nvme_admin": false, 00:22:53.592 "nvme_io": false, 00:22:53.592 "nvme_io_md": false, 00:22:53.592 "write_zeroes": true, 00:22:53.592 "zcopy": true, 00:22:53.592 "get_zone_info": false, 00:22:53.592 "zone_management": false, 00:22:53.592 "zone_append": false, 00:22:53.592 "compare": false, 00:22:53.592 "compare_and_write": false, 00:22:53.592 "abort": true, 00:22:53.592 "seek_hole": false, 00:22:53.592 "seek_data": false, 00:22:53.592 "copy": true, 00:22:53.592 "nvme_iov_md": false 00:22:53.592 }, 00:22:53.592 "memory_domains": [ 00:22:53.592 { 00:22:53.592 "dma_device_id": "system", 00:22:53.592 "dma_device_type": 1 00:22:53.592 }, 00:22:53.592 { 00:22:53.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.592 "dma_device_type": 2 00:22:53.592 } 00:22:53.592 ], 00:22:53.592 "driver_specific": {} 00:22:53.592 }' 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.592 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:53.851 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:53.851 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.851 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:53.851 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:53.851 14:15:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:54.110 [2024-07-15 14:15:40.009649] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:54.110 [2024-07-15 14:15:40.010128] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:54.110 [2024-07-15 14:15:40.010403] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.110 [2024-07-15 14:15:40.010661] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:54.110 [2024-07-15 14:15:40.010860] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 201474 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 201474 ']' 00:22:54.110 14:15:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 201474 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 201474 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 201474' 00:22:54.110 killing process with pid 201474 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 201474 00:22:54.110 14:15:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 201474 00:22:54.110 [2024-07-15 14:15:40.057895] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:54.677 [2024-07-15 14:15:40.453394] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:56.056 14:15:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:56.056 00:22:56.056 real 0m37.607s 00:22:56.056 user 1m9.031s 00:22:56.056 sys 0m4.416s 00:22:56.056 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:56.056 14:15:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:56.056 ************************************ 00:22:56.056 END TEST raid_state_function_test_sb 00:22:56.056 ************************************ 00:22:56.056 14:15:41 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:22:56.056 14:15:41 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:56.056 14:15:41 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:56.056 14:15:41 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.056 14:15:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:56.056 ************************************ 00:22:56.056 START TEST raid_superblock_test 00:22:56.056 ************************************ 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:56.056 14:15:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:22:56.056 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=202605 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 202605 /var/tmp/spdk-raid.sock 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 202605 ']' 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:56.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.057 14:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:56.057 [2024-07-15 14:15:41.740516] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
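(For orientation — a minimal sketch of the harness setup traced above, not itself part of the console output: raid_superblock_test starts a stub SPDK application, bdev_svc, and drives it over a dedicated JSON-RPC socket. Only commands already visible in the trace are used; the PID is whatever bdev_svc reports at startup.)

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # waitforlisten (helper from test/common/autotest_common.sh) blocks until the
    # UNIX-domain socket accepts JSON-RPC requests
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
    # every later step goes through the same socket:
    #   /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock <method> [args]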
00:22:56.057 [2024-07-15 14:15:41.740873] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid202605 ] 00:22:56.057 [2024-07-15 14:15:41.905389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.317 [2024-07-15 14:15:42.154325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.590 [2024-07-15 14:15:42.354636] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:56.848 14:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:57.106 malloc1 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:57.364 [2024-07-15 14:15:43.340434] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:57.364 [2024-07-15 14:15:43.341069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.364 [2024-07-15 14:15:43.341320] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:57.364 [2024-07-15 14:15:43.341533] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.364 [2024-07-15 14:15:43.343602] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.364 [2024-07-15 14:15:43.343883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:57.364 pt1 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.364 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:57.931 malloc2 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:57.931 [2024-07-15 14:15:43.876162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:57.931 [2024-07-15 14:15:43.876649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:57.931 [2024-07-15 14:15:43.876908] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:57.931 [2024-07-15 14:15:43.877143] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:57.931 [2024-07-15 14:15:43.879083] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:57.931 [2024-07-15 14:15:43.879311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:57.931 pt2 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:57.931 14:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:58.189 malloc3 00:22:58.189 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:58.448 [2024-07-15 14:15:44.381991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:58.448 [2024-07-15 14:15:44.382558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.448 [2024-07-15 14:15:44.382800] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:58.448 [2024-07-15 14:15:44.383018] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.448 [2024-07-15 14:15:44.384902] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.448 [2024-07-15 14:15:44.385138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:58.448 pt3 00:22:58.448 
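(A short recap of the per-base-bdev loop traced above, shown for pt3 and reusing the exact commands from the trace; the script derives the malloc/pt names and the UUID suffix from its loop counter. 32 MiB at a 512-byte block size is what yields the 65536 blocks reported in the bdev dumps.)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 \
        -u 00000000-0000-0000-0000-000000000003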
14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:58.448 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:58.706 malloc4 00:22:58.706 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:58.965 [2024-07-15 14:15:44.932123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:58.965 [2024-07-15 14:15:44.932659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:58.965 [2024-07-15 14:15:44.932940] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:58.965 [2024-07-15 14:15:44.933165] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:58.965 [2024-07-15 14:15:44.935224] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:58.965 [2024-07-15 14:15:44.935462] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:58.965 pt4 00:22:58.965 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:58.965 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:58.965 14:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:59.225 [2024-07-15 14:15:45.180225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:59.225 [2024-07-15 14:15:45.181980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:59.225 [2024-07-15 14:15:45.182206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:59.225 [2024-07-15 14:15:45.182378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:59.225 [2024-07-15 14:15:45.182660] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:22:59.225 [2024-07-15 14:15:45.182800] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:59.225 [2024-07-15 14:15:45.183045] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:59.225 [2024-07-15 14:15:45.183450] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:22:59.225 [2024-07-15 14:15:45.183600] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:22:59.225 [2024-07-15 14:15:45.183867] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.225 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:59.484 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:59.484 "name": "raid_bdev1", 00:22:59.484 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:22:59.484 "strip_size_kb": 64, 00:22:59.484 "state": "online", 00:22:59.484 "raid_level": "raid0", 00:22:59.484 "superblock": true, 00:22:59.484 "num_base_bdevs": 4, 00:22:59.484 "num_base_bdevs_discovered": 4, 00:22:59.484 "num_base_bdevs_operational": 4, 00:22:59.484 "base_bdevs_list": [ 00:22:59.484 { 00:22:59.484 "name": "pt1", 00:22:59.484 "uuid": "00000000-0000-0000-0000-000000000001", 00:22:59.484 "is_configured": true, 00:22:59.484 "data_offset": 2048, 00:22:59.484 "data_size": 63488 00:22:59.484 }, 00:22:59.484 { 00:22:59.484 "name": "pt2", 00:22:59.484 "uuid": "00000000-0000-0000-0000-000000000002", 00:22:59.484 "is_configured": true, 00:22:59.484 "data_offset": 2048, 00:22:59.484 "data_size": 63488 00:22:59.484 }, 00:22:59.484 { 00:22:59.484 "name": "pt3", 00:22:59.484 "uuid": "00000000-0000-0000-0000-000000000003", 00:22:59.484 "is_configured": true, 00:22:59.484 "data_offset": 2048, 00:22:59.484 "data_size": 63488 00:22:59.484 }, 00:22:59.484 { 00:22:59.484 "name": "pt4", 00:22:59.484 "uuid": "00000000-0000-0000-0000-000000000004", 00:22:59.484 "is_configured": true, 00:22:59.484 "data_offset": 2048, 00:22:59.484 "data_size": 63488 00:22:59.484 } 00:22:59.484 ] 00:22:59.484 }' 00:22:59.484 14:15:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:59.484 14:15:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:00.420 [2024-07-15 14:15:46.320581] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:00.420 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:00.420 "name": "raid_bdev1", 00:23:00.420 "aliases": [ 00:23:00.420 "2b108422-f410-4b33-b26a-097a206897a8" 00:23:00.420 ], 00:23:00.420 "product_name": "Raid Volume", 00:23:00.420 "block_size": 512, 00:23:00.420 "num_blocks": 253952, 00:23:00.420 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:23:00.420 "assigned_rate_limits": { 00:23:00.420 "rw_ios_per_sec": 0, 00:23:00.420 "rw_mbytes_per_sec": 0, 00:23:00.420 "r_mbytes_per_sec": 0, 00:23:00.420 "w_mbytes_per_sec": 0 00:23:00.420 }, 00:23:00.420 "claimed": false, 00:23:00.420 "zoned": false, 00:23:00.420 "supported_io_types": { 00:23:00.420 "read": true, 00:23:00.420 "write": true, 00:23:00.420 "unmap": true, 00:23:00.420 "flush": true, 00:23:00.420 "reset": true, 00:23:00.420 "nvme_admin": false, 00:23:00.420 "nvme_io": false, 00:23:00.420 "nvme_io_md": false, 00:23:00.420 "write_zeroes": true, 00:23:00.420 "zcopy": false, 00:23:00.420 "get_zone_info": false, 00:23:00.420 "zone_management": false, 00:23:00.420 "zone_append": false, 00:23:00.420 "compare": false, 00:23:00.420 "compare_and_write": false, 00:23:00.420 "abort": false, 00:23:00.420 "seek_hole": false, 00:23:00.420 "seek_data": false, 00:23:00.420 "copy": false, 00:23:00.420 "nvme_iov_md": false 00:23:00.420 }, 00:23:00.420 "memory_domains": [ 00:23:00.420 { 00:23:00.420 "dma_device_id": "system", 00:23:00.420 "dma_device_type": 1 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.420 "dma_device_type": 2 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "dma_device_id": "system", 00:23:00.420 "dma_device_type": 1 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.420 "dma_device_type": 2 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "dma_device_id": "system", 00:23:00.420 "dma_device_type": 1 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.420 "dma_device_type": 2 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "dma_device_id": "system", 00:23:00.420 "dma_device_type": 1 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.420 "dma_device_type": 2 00:23:00.420 } 00:23:00.420 ], 00:23:00.420 "driver_specific": { 00:23:00.420 "raid": { 00:23:00.420 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:23:00.420 "strip_size_kb": 64, 00:23:00.420 "state": "online", 00:23:00.420 "raid_level": "raid0", 00:23:00.420 "superblock": true, 00:23:00.420 "num_base_bdevs": 4, 00:23:00.420 "num_base_bdevs_discovered": 4, 00:23:00.420 "num_base_bdevs_operational": 4, 00:23:00.420 "base_bdevs_list": [ 00:23:00.420 { 00:23:00.420 "name": "pt1", 00:23:00.420 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:00.420 "is_configured": true, 00:23:00.420 "data_offset": 2048, 00:23:00.420 "data_size": 63488 00:23:00.420 }, 00:23:00.420 { 00:23:00.420 "name": "pt2", 00:23:00.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:00.420 "is_configured": true, 00:23:00.420 "data_offset": 2048, 00:23:00.420 "data_size": 63488 00:23:00.420 }, 00:23:00.421 { 00:23:00.421 "name": "pt3", 00:23:00.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:00.421 "is_configured": true, 00:23:00.421 "data_offset": 2048, 00:23:00.421 "data_size": 63488 00:23:00.421 }, 00:23:00.421 { 00:23:00.421 "name": "pt4", 00:23:00.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:00.421 "is_configured": true, 00:23:00.421 "data_offset": 2048, 00:23:00.421 "data_size": 63488 00:23:00.421 } 00:23:00.421 ] 00:23:00.421 } 00:23:00.421 } 00:23:00.421 }' 00:23:00.421 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:00.421 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:00.421 pt2 00:23:00.421 pt3 00:23:00.421 pt4' 00:23:00.421 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:00.421 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:00.421 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:00.680 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:00.680 "name": "pt1", 00:23:00.680 "aliases": [ 00:23:00.680 "00000000-0000-0000-0000-000000000001" 00:23:00.680 ], 00:23:00.680 "product_name": "passthru", 00:23:00.680 "block_size": 512, 00:23:00.680 "num_blocks": 65536, 00:23:00.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:00.680 "assigned_rate_limits": { 00:23:00.680 "rw_ios_per_sec": 0, 00:23:00.680 "rw_mbytes_per_sec": 0, 00:23:00.680 "r_mbytes_per_sec": 0, 00:23:00.680 "w_mbytes_per_sec": 0 00:23:00.680 }, 00:23:00.680 "claimed": true, 00:23:00.680 "claim_type": "exclusive_write", 00:23:00.680 "zoned": false, 00:23:00.680 "supported_io_types": { 00:23:00.680 "read": true, 00:23:00.680 "write": true, 00:23:00.680 "unmap": true, 00:23:00.680 "flush": true, 00:23:00.680 "reset": true, 00:23:00.680 "nvme_admin": false, 00:23:00.680 "nvme_io": false, 00:23:00.680 "nvme_io_md": false, 00:23:00.680 "write_zeroes": true, 00:23:00.680 "zcopy": true, 00:23:00.680 "get_zone_info": false, 00:23:00.680 "zone_management": false, 00:23:00.680 "zone_append": false, 00:23:00.680 "compare": false, 00:23:00.680 "compare_and_write": false, 00:23:00.680 "abort": true, 00:23:00.680 "seek_hole": false, 00:23:00.680 "seek_data": false, 00:23:00.680 "copy": true, 00:23:00.680 "nvme_iov_md": false 00:23:00.680 }, 00:23:00.680 "memory_domains": [ 00:23:00.680 { 00:23:00.680 "dma_device_id": "system", 00:23:00.680 "dma_device_type": 1 00:23:00.680 }, 00:23:00.680 { 00:23:00.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.680 "dma_device_type": 2 00:23:00.680 } 00:23:00.680 ], 00:23:00.680 "driver_specific": { 00:23:00.680 "passthru": { 00:23:00.680 "name": "pt1", 00:23:00.680 "base_bdev_name": "malloc1" 00:23:00.680 } 00:23:00.680 } 00:23:00.680 }' 00:23:00.680 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:00.938 14:15:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:00.938 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:00.938 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:00.938 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:00.938 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:00.938 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:00.938 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.197 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:01.197 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.197 14:15:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.197 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:01.197 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:01.197 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:01.197 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:01.455 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:01.455 "name": "pt2", 00:23:01.455 "aliases": [ 00:23:01.455 "00000000-0000-0000-0000-000000000002" 00:23:01.455 ], 00:23:01.455 "product_name": "passthru", 00:23:01.455 "block_size": 512, 00:23:01.455 "num_blocks": 65536, 00:23:01.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:01.455 "assigned_rate_limits": { 00:23:01.455 "rw_ios_per_sec": 0, 00:23:01.455 "rw_mbytes_per_sec": 0, 00:23:01.455 "r_mbytes_per_sec": 0, 00:23:01.455 "w_mbytes_per_sec": 0 00:23:01.455 }, 00:23:01.455 "claimed": true, 00:23:01.455 "claim_type": "exclusive_write", 00:23:01.455 "zoned": false, 00:23:01.455 "supported_io_types": { 00:23:01.455 "read": true, 00:23:01.455 "write": true, 00:23:01.455 "unmap": true, 00:23:01.455 "flush": true, 00:23:01.455 "reset": true, 00:23:01.455 "nvme_admin": false, 00:23:01.455 "nvme_io": false, 00:23:01.455 "nvme_io_md": false, 00:23:01.455 "write_zeroes": true, 00:23:01.455 "zcopy": true, 00:23:01.455 "get_zone_info": false, 00:23:01.455 "zone_management": false, 00:23:01.455 "zone_append": false, 00:23:01.455 "compare": false, 00:23:01.455 "compare_and_write": false, 00:23:01.455 "abort": true, 00:23:01.455 "seek_hole": false, 00:23:01.455 "seek_data": false, 00:23:01.455 "copy": true, 00:23:01.455 "nvme_iov_md": false 00:23:01.455 }, 00:23:01.455 "memory_domains": [ 00:23:01.455 { 00:23:01.455 "dma_device_id": "system", 00:23:01.455 "dma_device_type": 1 00:23:01.455 }, 00:23:01.455 { 00:23:01.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.455 "dma_device_type": 2 00:23:01.455 } 00:23:01.455 ], 00:23:01.455 "driver_specific": { 00:23:01.455 "passthru": { 00:23:01.455 "name": "pt2", 00:23:01.455 "base_bdev_name": "malloc2" 00:23:01.455 } 00:23:01.455 } 00:23:01.455 }' 00:23:01.455 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.455 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:01.455 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:23:01.455 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.455 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:01.714 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:01.973 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:01.973 "name": "pt3", 00:23:01.973 "aliases": [ 00:23:01.973 "00000000-0000-0000-0000-000000000003" 00:23:01.973 ], 00:23:01.973 "product_name": "passthru", 00:23:01.973 "block_size": 512, 00:23:01.973 "num_blocks": 65536, 00:23:01.973 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:01.973 "assigned_rate_limits": { 00:23:01.973 "rw_ios_per_sec": 0, 00:23:01.973 "rw_mbytes_per_sec": 0, 00:23:01.973 "r_mbytes_per_sec": 0, 00:23:01.973 "w_mbytes_per_sec": 0 00:23:01.973 }, 00:23:01.973 "claimed": true, 00:23:01.973 "claim_type": "exclusive_write", 00:23:01.973 "zoned": false, 00:23:01.973 "supported_io_types": { 00:23:01.973 "read": true, 00:23:01.973 "write": true, 00:23:01.973 "unmap": true, 00:23:01.973 "flush": true, 00:23:01.973 "reset": true, 00:23:01.973 "nvme_admin": false, 00:23:01.973 "nvme_io": false, 00:23:01.973 "nvme_io_md": false, 00:23:01.973 "write_zeroes": true, 00:23:01.973 "zcopy": true, 00:23:01.973 "get_zone_info": false, 00:23:01.973 "zone_management": false, 00:23:01.973 "zone_append": false, 00:23:01.973 "compare": false, 00:23:01.973 "compare_and_write": false, 00:23:01.973 "abort": true, 00:23:01.973 "seek_hole": false, 00:23:01.973 "seek_data": false, 00:23:01.973 "copy": true, 00:23:01.973 "nvme_iov_md": false 00:23:01.973 }, 00:23:01.973 "memory_domains": [ 00:23:01.973 { 00:23:01.973 "dma_device_id": "system", 00:23:01.973 "dma_device_type": 1 00:23:01.973 }, 00:23:01.973 { 00:23:01.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.973 "dma_device_type": 2 00:23:01.973 } 00:23:01.973 ], 00:23:01.973 "driver_specific": { 00:23:01.973 "passthru": { 00:23:01.973 "name": "pt3", 00:23:01.973 "base_bdev_name": "malloc3" 00:23:01.973 } 00:23:01.973 } 00:23:01.973 }' 00:23:01.973 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.244 14:15:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.244 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.244 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.244 14:15:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.244 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:02.244 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.244 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:02.244 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:02.244 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.503 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:02.503 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:02.503 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:02.503 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:02.503 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:02.760 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:02.760 "name": "pt4", 00:23:02.760 "aliases": [ 00:23:02.760 "00000000-0000-0000-0000-000000000004" 00:23:02.760 ], 00:23:02.760 "product_name": "passthru", 00:23:02.760 "block_size": 512, 00:23:02.760 "num_blocks": 65536, 00:23:02.760 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:02.760 "assigned_rate_limits": { 00:23:02.760 "rw_ios_per_sec": 0, 00:23:02.760 "rw_mbytes_per_sec": 0, 00:23:02.760 "r_mbytes_per_sec": 0, 00:23:02.760 "w_mbytes_per_sec": 0 00:23:02.760 }, 00:23:02.760 "claimed": true, 00:23:02.760 "claim_type": "exclusive_write", 00:23:02.761 "zoned": false, 00:23:02.761 "supported_io_types": { 00:23:02.761 "read": true, 00:23:02.761 "write": true, 00:23:02.761 "unmap": true, 00:23:02.761 "flush": true, 00:23:02.761 "reset": true, 00:23:02.761 "nvme_admin": false, 00:23:02.761 "nvme_io": false, 00:23:02.761 "nvme_io_md": false, 00:23:02.761 "write_zeroes": true, 00:23:02.761 "zcopy": true, 00:23:02.761 "get_zone_info": false, 00:23:02.761 "zone_management": false, 00:23:02.761 "zone_append": false, 00:23:02.761 "compare": false, 00:23:02.761 "compare_and_write": false, 00:23:02.761 "abort": true, 00:23:02.761 "seek_hole": false, 00:23:02.761 "seek_data": false, 00:23:02.761 "copy": true, 00:23:02.761 "nvme_iov_md": false 00:23:02.761 }, 00:23:02.761 "memory_domains": [ 00:23:02.761 { 00:23:02.761 "dma_device_id": "system", 00:23:02.761 "dma_device_type": 1 00:23:02.761 }, 00:23:02.761 { 00:23:02.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.761 "dma_device_type": 2 00:23:02.761 } 00:23:02.761 ], 00:23:02.761 "driver_specific": { 00:23:02.761 "passthru": { 00:23:02.761 "name": "pt4", 00:23:02.761 "base_bdev_name": "malloc4" 00:23:02.761 } 00:23:02.761 } 00:23:02.761 }' 00:23:02.761 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.761 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:02.761 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:02.761 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:02.761 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:03.019 14:15:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:03.276 [2024-07-15 14:15:49.201136] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:03.276 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2b108422-f410-4b33-b26a-097a206897a8 00:23:03.276 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2b108422-f410-4b33-b26a-097a206897a8 ']' 00:23:03.276 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:03.533 [2024-07-15 14:15:49.481042] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:03.533 [2024-07-15 14:15:49.481296] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:03.533 [2024-07-15 14:15:49.481583] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:03.533 [2024-07-15 14:15:49.481845] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:03.533 [2024-07-15 14:15:49.482013] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:23:03.533 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:03.533 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.790 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:03.790 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:03.790 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:03.790 14:15:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:04.047 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:04.047 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:04.612 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:04.612 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:04.612 14:15:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:04.612 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:04.869 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:04.869 14:15:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:05.126 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:23:05.383 [2024-07-15 14:15:51.349299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:05.383 [2024-07-15 14:15:51.351102] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:05.383 [2024-07-15 14:15:51.351321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:05.383 [2024-07-15 14:15:51.351470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:23:05.383 [2024-07-15 14:15:51.351622] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:23:05.383 [2024-07-15 14:15:51.351836] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:23:05.383 [2024-07-15 14:15:51.352002] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found 
on bdev malloc3 00:23:05.383 [2024-07-15 14:15:51.352157] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:23:05.383 [2024-07-15 14:15:51.352297] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:05.383 [2024-07-15 14:15:51.352403] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:23:05.383 request: 00:23:05.383 { 00:23:05.383 "name": "raid_bdev1", 00:23:05.383 "raid_level": "raid0", 00:23:05.383 "base_bdevs": [ 00:23:05.383 "malloc1", 00:23:05.383 "malloc2", 00:23:05.383 "malloc3", 00:23:05.383 "malloc4" 00:23:05.383 ], 00:23:05.383 "strip_size_kb": 64, 00:23:05.383 "superblock": false, 00:23:05.383 "method": "bdev_raid_create", 00:23:05.383 "req_id": 1 00:23:05.383 } 00:23:05.383 Got JSON-RPC error response 00:23:05.383 response: 00:23:05.383 { 00:23:05.383 "code": -17, 00:23:05.383 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:05.383 } 00:23:05.383 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:23:05.383 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:05.383 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:05.383 14:15:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:05.383 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.383 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:23:05.642 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:23:05.642 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:23:05.642 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:05.901 [2024-07-15 14:15:51.853308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:05.901 [2024-07-15 14:15:51.853646] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:05.901 [2024-07-15 14:15:51.853753] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:05.901 [2024-07-15 14:15:51.854072] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:05.901 [2024-07-15 14:15:51.855961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:05.901 [2024-07-15 14:15:51.856129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:05.901 [2024-07-15 14:15:51.856334] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:23:05.901 [2024-07-15 14:15:51.856434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:05.901 pt1 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.901 14:15:51 
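(What the failed call above demonstrates, restated as a sketch using the command and messages from the trace: after raid_bdev1 and the pt* passthru bdevs are deleted, the malloc bdevs still carry the RAID superblock that was written through them, so building a new array directly on top of the malloc bdevs is rejected.)

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
    # -> "Superblock of a different raid bdev found on bdev malloc1" (and malloc2..4),
    #    and the RPC fails with JSON-RPC error -17:
    #    "Failed to create RAID bdev raid_bdev1: File exists"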
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.901 14:15:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:06.158 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.158 "name": "raid_bdev1", 00:23:06.158 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:23:06.158 "strip_size_kb": 64, 00:23:06.158 "state": "configuring", 00:23:06.158 "raid_level": "raid0", 00:23:06.158 "superblock": true, 00:23:06.158 "num_base_bdevs": 4, 00:23:06.158 "num_base_bdevs_discovered": 1, 00:23:06.158 "num_base_bdevs_operational": 4, 00:23:06.158 "base_bdevs_list": [ 00:23:06.158 { 00:23:06.158 "name": "pt1", 00:23:06.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:06.158 "is_configured": true, 00:23:06.158 "data_offset": 2048, 00:23:06.158 "data_size": 63488 00:23:06.158 }, 00:23:06.158 { 00:23:06.158 "name": null, 00:23:06.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:06.158 "is_configured": false, 00:23:06.158 "data_offset": 2048, 00:23:06.158 "data_size": 63488 00:23:06.158 }, 00:23:06.158 { 00:23:06.158 "name": null, 00:23:06.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:06.158 "is_configured": false, 00:23:06.158 "data_offset": 2048, 00:23:06.158 "data_size": 63488 00:23:06.158 }, 00:23:06.158 { 00:23:06.158 "name": null, 00:23:06.158 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:06.158 "is_configured": false, 00:23:06.158 "data_offset": 2048, 00:23:06.158 "data_size": 63488 00:23:06.158 } 00:23:06.158 ] 00:23:06.158 }' 00:23:06.158 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.158 14:15:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:07.092 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:23:07.092 14:15:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:07.092 [2024-07-15 14:15:53.009535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:07.092 [2024-07-15 14:15:53.009936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:07.092 [2024-07-15 14:15:53.010119] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:23:07.092 [2024-07-15 14:15:53.010305] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:07.092 [2024-07-15 14:15:53.010826] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:23:07.092 [2024-07-15 14:15:53.010991] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:07.092 [2024-07-15 14:15:53.011213] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:07.092 [2024-07-15 14:15:53.011348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:07.092 pt2 00:23:07.092 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:07.351 [2024-07-15 14:15:53.289591] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.351 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:07.610 14:15:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.610 "name": "raid_bdev1", 00:23:07.610 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:23:07.610 "strip_size_kb": 64, 00:23:07.610 "state": "configuring", 00:23:07.610 "raid_level": "raid0", 00:23:07.610 "superblock": true, 00:23:07.610 "num_base_bdevs": 4, 00:23:07.610 "num_base_bdevs_discovered": 1, 00:23:07.610 "num_base_bdevs_operational": 4, 00:23:07.610 "base_bdevs_list": [ 00:23:07.610 { 00:23:07.610 "name": "pt1", 00:23:07.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:07.610 "is_configured": true, 00:23:07.610 "data_offset": 2048, 00:23:07.610 "data_size": 63488 00:23:07.610 }, 00:23:07.610 { 00:23:07.610 "name": null, 00:23:07.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:07.610 "is_configured": false, 00:23:07.610 "data_offset": 2048, 00:23:07.610 "data_size": 63488 00:23:07.610 }, 00:23:07.610 { 00:23:07.610 "name": null, 00:23:07.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:07.610 "is_configured": false, 00:23:07.610 "data_offset": 2048, 00:23:07.610 "data_size": 63488 00:23:07.610 }, 00:23:07.610 { 00:23:07.610 "name": null, 00:23:07.610 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:07.610 "is_configured": false, 00:23:07.610 "data_offset": 2048, 00:23:07.610 "data_size": 63488 00:23:07.610 } 00:23:07.610 ] 00:23:07.610 }' 00:23:07.610 14:15:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.610 14:15:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.545 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:23:08.545 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:08.545 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:08.545 [2024-07-15 14:15:54.481752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:08.545 [2024-07-15 14:15:54.482101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.545 [2024-07-15 14:15:54.482267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:23:08.545 [2024-07-15 14:15:54.482468] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.545 [2024-07-15 14:15:54.482983] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.545 [2024-07-15 14:15:54.483164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:08.545 [2024-07-15 14:15:54.483370] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:23:08.545 [2024-07-15 14:15:54.483543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:08.545 pt2 00:23:08.545 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:08.545 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:08.545 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:08.804 [2024-07-15 14:15:54.781837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:08.804 [2024-07-15 14:15:54.782161] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.804 [2024-07-15 14:15:54.782235] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:08.804 [2024-07-15 14:15:54.782506] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.804 [2024-07-15 14:15:54.783026] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.804 [2024-07-15 14:15:54.783219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:08.804 [2024-07-15 14:15:54.783414] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:23:08.804 [2024-07-15 14:15:54.783537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:08.804 pt3 00:23:08.804 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:08.804 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:08.804 14:15:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:09.370 [2024-07-15 14:15:55.065877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:23:09.370 [2024-07-15 14:15:55.066203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:09.370 [2024-07-15 14:15:55.066363] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:09.370 [2024-07-15 14:15:55.066521] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.370 [2024-07-15 14:15:55.067005] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.370 [2024-07-15 14:15:55.067161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:09.370 [2024-07-15 14:15:55.067354] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:23:09.370 [2024-07-15 14:15:55.067505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:09.370 [2024-07-15 14:15:55.067699] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:23:09.370 [2024-07-15 14:15:55.067834] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:09.370 [2024-07-15 14:15:55.068026] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:23:09.370 [2024-07-15 14:15:55.068412] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:23:09.370 [2024-07-15 14:15:55.068579] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:23:09.370 [2024-07-15 14:15:55.068812] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.370 pt4 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.370 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:09.370 "name": "raid_bdev1", 00:23:09.370 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:23:09.370 "strip_size_kb": 64, 00:23:09.370 "state": "online", 00:23:09.370 
"raid_level": "raid0", 00:23:09.370 "superblock": true, 00:23:09.370 "num_base_bdevs": 4, 00:23:09.370 "num_base_bdevs_discovered": 4, 00:23:09.370 "num_base_bdevs_operational": 4, 00:23:09.370 "base_bdevs_list": [ 00:23:09.370 { 00:23:09.370 "name": "pt1", 00:23:09.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:09.370 "is_configured": true, 00:23:09.370 "data_offset": 2048, 00:23:09.370 "data_size": 63488 00:23:09.370 }, 00:23:09.370 { 00:23:09.370 "name": "pt2", 00:23:09.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:09.370 "is_configured": true, 00:23:09.371 "data_offset": 2048, 00:23:09.371 "data_size": 63488 00:23:09.371 }, 00:23:09.371 { 00:23:09.371 "name": "pt3", 00:23:09.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:09.371 "is_configured": true, 00:23:09.371 "data_offset": 2048, 00:23:09.371 "data_size": 63488 00:23:09.371 }, 00:23:09.371 { 00:23:09.371 "name": "pt4", 00:23:09.371 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:09.371 "is_configured": true, 00:23:09.371 "data_offset": 2048, 00:23:09.371 "data_size": 63488 00:23:09.371 } 00:23:09.371 ] 00:23:09.371 }' 00:23:09.371 14:15:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:09.371 14:15:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:10.305 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:10.305 [2024-07-15 14:15:56.290327] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.563 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:10.563 "name": "raid_bdev1", 00:23:10.563 "aliases": [ 00:23:10.563 "2b108422-f410-4b33-b26a-097a206897a8" 00:23:10.563 ], 00:23:10.563 "product_name": "Raid Volume", 00:23:10.563 "block_size": 512, 00:23:10.563 "num_blocks": 253952, 00:23:10.563 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:23:10.563 "assigned_rate_limits": { 00:23:10.563 "rw_ios_per_sec": 0, 00:23:10.563 "rw_mbytes_per_sec": 0, 00:23:10.563 "r_mbytes_per_sec": 0, 00:23:10.563 "w_mbytes_per_sec": 0 00:23:10.563 }, 00:23:10.563 "claimed": false, 00:23:10.563 "zoned": false, 00:23:10.563 "supported_io_types": { 00:23:10.563 "read": true, 00:23:10.563 "write": true, 00:23:10.563 "unmap": true, 00:23:10.563 "flush": true, 00:23:10.563 "reset": true, 00:23:10.563 "nvme_admin": false, 00:23:10.563 "nvme_io": false, 00:23:10.563 "nvme_io_md": false, 00:23:10.563 "write_zeroes": true, 00:23:10.563 "zcopy": false, 00:23:10.563 "get_zone_info": false, 00:23:10.563 "zone_management": false, 00:23:10.563 "zone_append": false, 00:23:10.563 "compare": false, 00:23:10.563 "compare_and_write": false, 
00:23:10.563 "abort": false, 00:23:10.563 "seek_hole": false, 00:23:10.563 "seek_data": false, 00:23:10.563 "copy": false, 00:23:10.563 "nvme_iov_md": false 00:23:10.563 }, 00:23:10.563 "memory_domains": [ 00:23:10.563 { 00:23:10.563 "dma_device_id": "system", 00:23:10.563 "dma_device_type": 1 00:23:10.563 }, 00:23:10.563 { 00:23:10.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.563 "dma_device_type": 2 00:23:10.563 }, 00:23:10.563 { 00:23:10.563 "dma_device_id": "system", 00:23:10.563 "dma_device_type": 1 00:23:10.563 }, 00:23:10.563 { 00:23:10.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.563 "dma_device_type": 2 00:23:10.563 }, 00:23:10.563 { 00:23:10.563 "dma_device_id": "system", 00:23:10.563 "dma_device_type": 1 00:23:10.563 }, 00:23:10.563 { 00:23:10.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.563 "dma_device_type": 2 00:23:10.563 }, 00:23:10.563 { 00:23:10.564 "dma_device_id": "system", 00:23:10.564 "dma_device_type": 1 00:23:10.564 }, 00:23:10.564 { 00:23:10.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.564 "dma_device_type": 2 00:23:10.564 } 00:23:10.564 ], 00:23:10.564 "driver_specific": { 00:23:10.564 "raid": { 00:23:10.564 "uuid": "2b108422-f410-4b33-b26a-097a206897a8", 00:23:10.564 "strip_size_kb": 64, 00:23:10.564 "state": "online", 00:23:10.564 "raid_level": "raid0", 00:23:10.564 "superblock": true, 00:23:10.564 "num_base_bdevs": 4, 00:23:10.564 "num_base_bdevs_discovered": 4, 00:23:10.564 "num_base_bdevs_operational": 4, 00:23:10.564 "base_bdevs_list": [ 00:23:10.564 { 00:23:10.564 "name": "pt1", 00:23:10.564 "uuid": "00000000-0000-0000-0000-000000000001", 00:23:10.564 "is_configured": true, 00:23:10.564 "data_offset": 2048, 00:23:10.564 "data_size": 63488 00:23:10.564 }, 00:23:10.564 { 00:23:10.564 "name": "pt2", 00:23:10.564 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:10.564 "is_configured": true, 00:23:10.564 "data_offset": 2048, 00:23:10.564 "data_size": 63488 00:23:10.564 }, 00:23:10.564 { 00:23:10.564 "name": "pt3", 00:23:10.564 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:10.564 "is_configured": true, 00:23:10.564 "data_offset": 2048, 00:23:10.564 "data_size": 63488 00:23:10.564 }, 00:23:10.564 { 00:23:10.564 "name": "pt4", 00:23:10.564 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:10.564 "is_configured": true, 00:23:10.564 "data_offset": 2048, 00:23:10.564 "data_size": 63488 00:23:10.564 } 00:23:10.564 ] 00:23:10.564 } 00:23:10.564 } 00:23:10.564 }' 00:23:10.564 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:10.564 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:10.564 pt2 00:23:10.564 pt3 00:23:10.564 pt4' 00:23:10.564 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:10.564 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:10.564 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:10.822 "name": "pt1", 00:23:10.822 "aliases": [ 00:23:10.822 "00000000-0000-0000-0000-000000000001" 00:23:10.822 ], 00:23:10.822 "product_name": "passthru", 00:23:10.822 "block_size": 512, 00:23:10.822 "num_blocks": 65536, 00:23:10.822 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:23:10.822 "assigned_rate_limits": { 00:23:10.822 "rw_ios_per_sec": 0, 00:23:10.822 "rw_mbytes_per_sec": 0, 00:23:10.822 "r_mbytes_per_sec": 0, 00:23:10.822 "w_mbytes_per_sec": 0 00:23:10.822 }, 00:23:10.822 "claimed": true, 00:23:10.822 "claim_type": "exclusive_write", 00:23:10.822 "zoned": false, 00:23:10.822 "supported_io_types": { 00:23:10.822 "read": true, 00:23:10.822 "write": true, 00:23:10.822 "unmap": true, 00:23:10.822 "flush": true, 00:23:10.822 "reset": true, 00:23:10.822 "nvme_admin": false, 00:23:10.822 "nvme_io": false, 00:23:10.822 "nvme_io_md": false, 00:23:10.822 "write_zeroes": true, 00:23:10.822 "zcopy": true, 00:23:10.822 "get_zone_info": false, 00:23:10.822 "zone_management": false, 00:23:10.822 "zone_append": false, 00:23:10.822 "compare": false, 00:23:10.822 "compare_and_write": false, 00:23:10.822 "abort": true, 00:23:10.822 "seek_hole": false, 00:23:10.822 "seek_data": false, 00:23:10.822 "copy": true, 00:23:10.822 "nvme_iov_md": false 00:23:10.822 }, 00:23:10.822 "memory_domains": [ 00:23:10.822 { 00:23:10.822 "dma_device_id": "system", 00:23:10.822 "dma_device_type": 1 00:23:10.822 }, 00:23:10.822 { 00:23:10.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.822 "dma_device_type": 2 00:23:10.822 } 00:23:10.822 ], 00:23:10.822 "driver_specific": { 00:23:10.822 "passthru": { 00:23:10.822 "name": "pt1", 00:23:10.822 "base_bdev_name": "malloc1" 00:23:10.822 } 00:23:10.822 } 00:23:10.822 }' 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:10.822 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:11.080 14:15:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:11.338 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:11.338 "name": "pt2", 00:23:11.338 "aliases": [ 00:23:11.338 "00000000-0000-0000-0000-000000000002" 00:23:11.338 ], 00:23:11.338 "product_name": "passthru", 00:23:11.338 "block_size": 512, 00:23:11.338 "num_blocks": 65536, 00:23:11.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:23:11.338 "assigned_rate_limits": { 00:23:11.338 "rw_ios_per_sec": 0, 00:23:11.338 "rw_mbytes_per_sec": 0, 
00:23:11.338 "r_mbytes_per_sec": 0, 00:23:11.338 "w_mbytes_per_sec": 0 00:23:11.338 }, 00:23:11.338 "claimed": true, 00:23:11.338 "claim_type": "exclusive_write", 00:23:11.338 "zoned": false, 00:23:11.338 "supported_io_types": { 00:23:11.338 "read": true, 00:23:11.338 "write": true, 00:23:11.338 "unmap": true, 00:23:11.338 "flush": true, 00:23:11.338 "reset": true, 00:23:11.338 "nvme_admin": false, 00:23:11.338 "nvme_io": false, 00:23:11.338 "nvme_io_md": false, 00:23:11.338 "write_zeroes": true, 00:23:11.338 "zcopy": true, 00:23:11.338 "get_zone_info": false, 00:23:11.338 "zone_management": false, 00:23:11.338 "zone_append": false, 00:23:11.338 "compare": false, 00:23:11.338 "compare_and_write": false, 00:23:11.338 "abort": true, 00:23:11.338 "seek_hole": false, 00:23:11.338 "seek_data": false, 00:23:11.338 "copy": true, 00:23:11.338 "nvme_iov_md": false 00:23:11.338 }, 00:23:11.338 "memory_domains": [ 00:23:11.338 { 00:23:11.338 "dma_device_id": "system", 00:23:11.338 "dma_device_type": 1 00:23:11.338 }, 00:23:11.338 { 00:23:11.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.338 "dma_device_type": 2 00:23:11.338 } 00:23:11.338 ], 00:23:11.338 "driver_specific": { 00:23:11.338 "passthru": { 00:23:11.338 "name": "pt2", 00:23:11.338 "base_bdev_name": "malloc2" 00:23:11.338 } 00:23:11.338 } 00:23:11.338 }' 00:23:11.338 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.338 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.596 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:11.853 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:11.853 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:11.853 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:11.853 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:12.110 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.110 "name": "pt3", 00:23:12.110 "aliases": [ 00:23:12.110 "00000000-0000-0000-0000-000000000003" 00:23:12.110 ], 00:23:12.110 "product_name": "passthru", 00:23:12.110 "block_size": 512, 00:23:12.110 "num_blocks": 65536, 00:23:12.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:23:12.110 "assigned_rate_limits": { 00:23:12.110 "rw_ios_per_sec": 0, 00:23:12.110 "rw_mbytes_per_sec": 0, 00:23:12.110 "r_mbytes_per_sec": 0, 00:23:12.110 "w_mbytes_per_sec": 0 00:23:12.110 }, 00:23:12.110 "claimed": true, 00:23:12.110 "claim_type": 
"exclusive_write", 00:23:12.110 "zoned": false, 00:23:12.110 "supported_io_types": { 00:23:12.110 "read": true, 00:23:12.110 "write": true, 00:23:12.110 "unmap": true, 00:23:12.110 "flush": true, 00:23:12.110 "reset": true, 00:23:12.110 "nvme_admin": false, 00:23:12.110 "nvme_io": false, 00:23:12.110 "nvme_io_md": false, 00:23:12.110 "write_zeroes": true, 00:23:12.110 "zcopy": true, 00:23:12.110 "get_zone_info": false, 00:23:12.110 "zone_management": false, 00:23:12.110 "zone_append": false, 00:23:12.110 "compare": false, 00:23:12.110 "compare_and_write": false, 00:23:12.110 "abort": true, 00:23:12.110 "seek_hole": false, 00:23:12.110 "seek_data": false, 00:23:12.110 "copy": true, 00:23:12.110 "nvme_iov_md": false 00:23:12.110 }, 00:23:12.110 "memory_domains": [ 00:23:12.110 { 00:23:12.110 "dma_device_id": "system", 00:23:12.110 "dma_device_type": 1 00:23:12.110 }, 00:23:12.110 { 00:23:12.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.110 "dma_device_type": 2 00:23:12.110 } 00:23:12.110 ], 00:23:12.110 "driver_specific": { 00:23:12.110 "passthru": { 00:23:12.110 "name": "pt3", 00:23:12.110 "base_bdev_name": "malloc3" 00:23:12.110 } 00:23:12.110 } 00:23:12.110 }' 00:23:12.110 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.110 14:15:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.110 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.110 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.110 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:12.368 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:12.627 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:12.627 "name": "pt4", 00:23:12.627 "aliases": [ 00:23:12.627 "00000000-0000-0000-0000-000000000004" 00:23:12.627 ], 00:23:12.627 "product_name": "passthru", 00:23:12.627 "block_size": 512, 00:23:12.627 "num_blocks": 65536, 00:23:12.627 "uuid": "00000000-0000-0000-0000-000000000004", 00:23:12.627 "assigned_rate_limits": { 00:23:12.627 "rw_ios_per_sec": 0, 00:23:12.627 "rw_mbytes_per_sec": 0, 00:23:12.627 "r_mbytes_per_sec": 0, 00:23:12.627 "w_mbytes_per_sec": 0 00:23:12.627 }, 00:23:12.627 "claimed": true, 00:23:12.627 "claim_type": "exclusive_write", 00:23:12.627 "zoned": false, 00:23:12.627 "supported_io_types": { 00:23:12.627 "read": true, 00:23:12.627 "write": true, 00:23:12.627 
"unmap": true, 00:23:12.627 "flush": true, 00:23:12.627 "reset": true, 00:23:12.627 "nvme_admin": false, 00:23:12.627 "nvme_io": false, 00:23:12.627 "nvme_io_md": false, 00:23:12.627 "write_zeroes": true, 00:23:12.627 "zcopy": true, 00:23:12.627 "get_zone_info": false, 00:23:12.627 "zone_management": false, 00:23:12.627 "zone_append": false, 00:23:12.627 "compare": false, 00:23:12.627 "compare_and_write": false, 00:23:12.627 "abort": true, 00:23:12.627 "seek_hole": false, 00:23:12.627 "seek_data": false, 00:23:12.627 "copy": true, 00:23:12.627 "nvme_iov_md": false 00:23:12.627 }, 00:23:12.627 "memory_domains": [ 00:23:12.627 { 00:23:12.627 "dma_device_id": "system", 00:23:12.627 "dma_device_type": 1 00:23:12.627 }, 00:23:12.627 { 00:23:12.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:12.627 "dma_device_type": 2 00:23:12.627 } 00:23:12.627 ], 00:23:12.627 "driver_specific": { 00:23:12.627 "passthru": { 00:23:12.627 "name": "pt4", 00:23:12.627 "base_bdev_name": "malloc4" 00:23:12.627 } 00:23:12.627 } 00:23:12.627 }' 00:23:12.627 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.627 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:12.885 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:13.143 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:23:13.143 14:15:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:13.143 [2024-07-15 14:15:59.106747] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2b108422-f410-4b33-b26a-097a206897a8 '!=' 2b108422-f410-4b33-b26a-097a206897a8 ']' 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 202605 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 202605 ']' 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 202605 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:23:13.143 14:15:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:13.143 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 202605 00:23:13.402 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:13.402 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:13.402 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 202605' 00:23:13.402 killing process with pid 202605 00:23:13.402 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 202605 00:23:13.402 [2024-07-15 14:15:59.154429] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:13.402 14:15:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 202605 00:23:13.402 [2024-07-15 14:15:59.154644] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:13.402 [2024-07-15 14:15:59.154701] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:13.402 [2024-07-15 14:15:59.154712] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:23:13.660 [2024-07-15 14:15:59.484393] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:15.038 14:16:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:23:15.038 00:23:15.038 real 0m18.910s 00:23:15.038 user 0m33.995s 00:23:15.038 sys 0m2.129s 00:23:15.038 14:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:15.038 14:16:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.038 ************************************ 00:23:15.038 END TEST raid_superblock_test 00:23:15.038 ************************************ 00:23:15.038 14:16:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:15.038 14:16:00 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:23:15.038 14:16:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:15.038 14:16:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:15.038 14:16:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:15.038 ************************************ 00:23:15.038 START TEST raid_read_error_test 00:23:15.038 ************************************ 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:15.038 
14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:15.038 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Y3GyBf1l6s 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=203165 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 203165 /var/tmp/spdk-raid.sock 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 203165 ']' 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:15.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
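
For readability, here is a condensed sketch of the per-device setup that the raid_read_error_test trace below performs for each of its four base bdevs. The RPC socket, bdev names and sizes are the ones recorded in this log; the loop is only a summary of the repeated calls, not the literal script body.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        # 32 MB malloc bdev with 512-byte blocks (65536 blocks, matching the JSON dumps)
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        # wrap it in an error-injection bdev (registered as EE_BaseBdev<i>_malloc in this trace)
        $RPC bdev_error_create "BaseBdev${i}_malloc"
        # expose the error bdev through a passthru bdev that the raid module can claim
        $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # assemble the four passthru bdevs into a RAID0 volume with an on-disk superblock
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
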
00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.039 14:16:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:15.039 [2024-07-15 14:16:00.717848] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:15.039 [2024-07-15 14:16:00.718177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203165 ] 00:23:15.039 [2024-07-15 14:16:00.870884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.297 [2024-07-15 14:16:01.114550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.556 [2024-07-15 14:16:01.309283] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:15.814 14:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.814 14:16:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:15.814 14:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:15.814 14:16:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:16.071 BaseBdev1_malloc 00:23:16.071 14:16:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:16.329 true 00:23:16.587 14:16:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:16.587 [2024-07-15 14:16:02.580214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:16.587 [2024-07-15 14:16:02.580622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:16.587 [2024-07-15 14:16:02.580710] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:16.587 [2024-07-15 14:16:02.580962] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:16.587 [2024-07-15 14:16:02.582942] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:16.587 [2024-07-15 14:16:02.583140] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:16.587 BaseBdev1 00:23:16.846 14:16:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:16.846 14:16:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:17.105 BaseBdev2_malloc 00:23:17.105 14:16:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:17.374 true 00:23:17.374 14:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:17.632 [2024-07-15 14:16:03.467030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:23:17.632 [2024-07-15 14:16:03.467482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:17.632 [2024-07-15 14:16:03.467687] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:17.632 [2024-07-15 14:16:03.467845] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:17.632 [2024-07-15 14:16:03.469955] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:17.632 [2024-07-15 14:16:03.470207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:17.632 BaseBdev2 00:23:17.632 14:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:17.632 14:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:17.891 BaseBdev3_malloc 00:23:17.891 14:16:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:18.149 true 00:23:18.149 14:16:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:18.408 [2024-07-15 14:16:04.243031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:18.408 [2024-07-15 14:16:04.243371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.408 [2024-07-15 14:16:04.243451] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:18.408 [2024-07-15 14:16:04.243716] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.408 [2024-07-15 14:16:04.245650] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.408 [2024-07-15 14:16:04.245853] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:18.408 BaseBdev3 00:23:18.408 14:16:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:18.408 14:16:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:18.667 BaseBdev4_malloc 00:23:18.667 14:16:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:18.926 true 00:23:18.926 14:16:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:19.184 [2024-07-15 14:16:05.033314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:19.184 [2024-07-15 14:16:05.033605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.184 [2024-07-15 14:16:05.033775] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:19.184 [2024-07-15 14:16:05.033915] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.184 [2024-07-15 14:16:05.035784] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:23:19.184 [2024-07-15 14:16:05.035960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:19.184 BaseBdev4 00:23:19.184 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:19.460 [2024-07-15 14:16:05.293404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:19.460 [2024-07-15 14:16:05.295136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:19.460 [2024-07-15 14:16:05.295348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:19.460 [2024-07-15 14:16:05.295444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:19.460 [2024-07-15 14:16:05.295778] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:23:19.460 [2024-07-15 14:16:05.295834] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:19.460 [2024-07-15 14:16:05.296058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:19.460 [2024-07-15 14:16:05.296432] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:23:19.460 [2024-07-15 14:16:05.296566] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:23:19.460 [2024-07-15 14:16:05.296817] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.460 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.719 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.719 "name": "raid_bdev1", 00:23:19.719 "uuid": "72ea65ec-834e-4733-a1d8-3c8c24e1ee25", 00:23:19.719 "strip_size_kb": 64, 00:23:19.719 "state": "online", 00:23:19.719 "raid_level": "raid0", 00:23:19.719 "superblock": true, 00:23:19.719 "num_base_bdevs": 4, 00:23:19.719 "num_base_bdevs_discovered": 4, 00:23:19.719 
"num_base_bdevs_operational": 4, 00:23:19.719 "base_bdevs_list": [ 00:23:19.719 { 00:23:19.719 "name": "BaseBdev1", 00:23:19.719 "uuid": "5c0f73ca-f87b-59c7-88fc-a1e947965e85", 00:23:19.719 "is_configured": true, 00:23:19.719 "data_offset": 2048, 00:23:19.719 "data_size": 63488 00:23:19.719 }, 00:23:19.719 { 00:23:19.719 "name": "BaseBdev2", 00:23:19.719 "uuid": "3163ff93-baee-5da1-b087-a8ab1b466640", 00:23:19.719 "is_configured": true, 00:23:19.719 "data_offset": 2048, 00:23:19.719 "data_size": 63488 00:23:19.719 }, 00:23:19.719 { 00:23:19.719 "name": "BaseBdev3", 00:23:19.719 "uuid": "f6cb49e9-debf-5aa8-a105-bde3b09bae06", 00:23:19.719 "is_configured": true, 00:23:19.719 "data_offset": 2048, 00:23:19.719 "data_size": 63488 00:23:19.719 }, 00:23:19.719 { 00:23:19.719 "name": "BaseBdev4", 00:23:19.719 "uuid": "071dd393-f1a2-543a-80e9-98b444651d10", 00:23:19.719 "is_configured": true, 00:23:19.719 "data_offset": 2048, 00:23:19.719 "data_size": 63488 00:23:19.719 } 00:23:19.719 ] 00:23:19.719 }' 00:23:19.719 14:16:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.719 14:16:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:20.285 14:16:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:20.285 14:16:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:20.544 [2024-07-15 14:16:06.298683] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.481 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:23:22.047 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.047 "name": "raid_bdev1", 00:23:22.047 "uuid": "72ea65ec-834e-4733-a1d8-3c8c24e1ee25", 00:23:22.047 "strip_size_kb": 64, 00:23:22.047 "state": "online", 00:23:22.047 "raid_level": "raid0", 00:23:22.047 "superblock": true, 00:23:22.047 "num_base_bdevs": 4, 00:23:22.047 "num_base_bdevs_discovered": 4, 00:23:22.047 "num_base_bdevs_operational": 4, 00:23:22.047 "base_bdevs_list": [ 00:23:22.047 { 00:23:22.047 "name": "BaseBdev1", 00:23:22.047 "uuid": "5c0f73ca-f87b-59c7-88fc-a1e947965e85", 00:23:22.047 "is_configured": true, 00:23:22.047 "data_offset": 2048, 00:23:22.047 "data_size": 63488 00:23:22.047 }, 00:23:22.047 { 00:23:22.047 "name": "BaseBdev2", 00:23:22.047 "uuid": "3163ff93-baee-5da1-b087-a8ab1b466640", 00:23:22.047 "is_configured": true, 00:23:22.047 "data_offset": 2048, 00:23:22.047 "data_size": 63488 00:23:22.047 }, 00:23:22.047 { 00:23:22.047 "name": "BaseBdev3", 00:23:22.047 "uuid": "f6cb49e9-debf-5aa8-a105-bde3b09bae06", 00:23:22.047 "is_configured": true, 00:23:22.047 "data_offset": 2048, 00:23:22.047 "data_size": 63488 00:23:22.047 }, 00:23:22.047 { 00:23:22.047 "name": "BaseBdev4", 00:23:22.047 "uuid": "071dd393-f1a2-543a-80e9-98b444651d10", 00:23:22.047 "is_configured": true, 00:23:22.047 "data_offset": 2048, 00:23:22.047 "data_size": 63488 00:23:22.047 } 00:23:22.047 ] 00:23:22.047 }' 00:23:22.047 14:16:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.047 14:16:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:22.613 14:16:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.613 [2024-07-15 14:16:08.603085] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.613 [2024-07-15 14:16:08.603337] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.613 [2024-07-15 14:16:08.604774] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.613 [2024-07-15 14:16:08.604948] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:22.613 [2024-07-15 14:16:08.605042] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.613 [2024-07-15 14:16:08.605152] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:23:22.613 0 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 203165 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 203165 ']' 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 203165 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 203165 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 203165' 00:23:22.871 killing process with pid 203165 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 203165 00:23:22.871 [2024-07-15 14:16:08.643286] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:22.871 14:16:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 203165 00:23:23.147 [2024-07-15 14:16:08.922541] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Y3GyBf1l6s 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:23:24.561 00:23:24.561 real 0m9.475s 00:23:24.561 user 0m14.692s 00:23:24.561 sys 0m1.054s 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:24.561 14:16:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.561 ************************************ 00:23:24.561 END TEST raid_read_error_test 00:23:24.561 ************************************ 00:23:24.561 14:16:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:24.561 14:16:10 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:23:24.561 14:16:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:24.562 14:16:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:24.562 14:16:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:24.562 ************************************ 00:23:24.562 START TEST raid_write_error_test 00:23:24.562 ************************************ 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 
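
The read-error run above passes or fails on the bdevperf failure counter. Condensed from the commands recorded in this trace (the backgrounding and the wait are assumptions of this sketch; the script interleaves these steps itself), the closing sequence is roughly:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # drive the configured random read/write workload against raid_bdev1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    # make one base device start failing reads while the workload is running
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
    wait
    # pull the failures-per-second column for raid_bdev1 out of the bdevperf log
    fail_per_s=$(grep -v Job /raidtest/tmp.Y3GyBf1l6s | grep raid_bdev1 | awk '{print $6}')
    # raid0 has no redundancy, so the injected read errors must surface as failed I/O (0.43/s here)
    [[ "$fail_per_s" != "0.00" ]]
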
00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZcwMFzhYYj 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=203383 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 203383 /var/tmp/spdk-raid.sock 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 203383 ']' 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:24.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.562 14:16:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.562 [2024-07-15 14:16:10.245653] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
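
Both tests keep re-checking the array with verify_raid_bdev_state raid_bdev1 online raid0 64 4. Reduced to the fields visible in the JSON dumps above, that check amounts to something like the following; the explicit [[ ]] tests are an illustration of what the helper asserts, not its literal body:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # expected state, level, strip size and base bdev count come from the 'online raid0 64 4' arguments
    [[ $(jq -r '.state' <<< "$info") == "online" ]]
    [[ $(jq -r '.raid_level' <<< "$info") == "raid0" ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") -eq 64 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 4 ]]
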
00:23:24.562 [2024-07-15 14:16:10.246062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid203383 ] 00:23:24.562 [2024-07-15 14:16:10.400611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.821 [2024-07-15 14:16:10.617965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.079 [2024-07-15 14:16:10.832116] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:25.337 14:16:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.337 14:16:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:23:25.337 14:16:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:25.337 14:16:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:25.904 BaseBdev1_malloc 00:23:25.904 14:16:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:23:25.904 true 00:23:25.904 14:16:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:23:26.162 [2024-07-15 14:16:12.117431] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:23:26.162 [2024-07-15 14:16:12.118191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.162 [2024-07-15 14:16:12.118557] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:23:26.162 [2024-07-15 14:16:12.118798] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.162 [2024-07-15 14:16:12.120812] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.162 [2024-07-15 14:16:12.121149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:26.162 BaseBdev1 00:23:26.162 14:16:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:26.162 14:16:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:26.419 BaseBdev2_malloc 00:23:26.420 14:16:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:23:26.678 true 00:23:26.678 14:16:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:23:26.958 [2024-07-15 14:16:12.878442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:23:26.958 [2024-07-15 14:16:12.878901] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.958 [2024-07-15 14:16:12.879137] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:23:26.958 [2024-07-15 
14:16:12.879345] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.958 [2024-07-15 14:16:12.881257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.958 [2024-07-15 14:16:12.881483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:26.958 BaseBdev2 00:23:26.958 14:16:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:26.958 14:16:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:27.216 BaseBdev3_malloc 00:23:27.481 14:16:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:23:27.481 true 00:23:27.481 14:16:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:23:27.741 [2024-07-15 14:16:13.693183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:23:27.741 [2024-07-15 14:16:13.693787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:27.741 [2024-07-15 14:16:13.694017] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:27.741 [2024-07-15 14:16:13.694269] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:27.741 [2024-07-15 14:16:13.696182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:27.741 [2024-07-15 14:16:13.696412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:27.741 BaseBdev3 00:23:27.741 14:16:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:23:27.741 14:16:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:23:27.999 BaseBdev4_malloc 00:23:27.999 14:16:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:23:28.566 true 00:23:28.566 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:23:28.566 [2024-07-15 14:16:14.512023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:23:28.566 [2024-07-15 14:16:14.512537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.566 [2024-07-15 14:16:14.512791] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:23:28.566 [2024-07-15 14:16:14.513024] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.566 [2024-07-15 14:16:14.514909] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.566 [2024-07-15 14:16:14.515139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:23:28.566 BaseBdev4 00:23:28.566 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:23:28.826 [2024-07-15 14:16:14.752133] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.826 [2024-07-15 14:16:14.753908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.826 [2024-07-15 14:16:14.754104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:28.826 [2024-07-15 14:16:14.754198] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:28.826 [2024-07-15 14:16:14.754442] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:23:28.826 [2024-07-15 14:16:14.754494] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:28.826 [2024-07-15 14:16:14.754708] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:28.826 [2024-07-15 14:16:14.755092] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:23:28.826 [2024-07-15 14:16:14.755218] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:23:28.826 [2024-07-15 14:16:14.755450] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.826 14:16:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.084 14:16:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:29.084 "name": "raid_bdev1", 00:23:29.084 "uuid": "0513e2c6-3949-40f5-9fdb-b0d4a3d7e6f1", 00:23:29.084 "strip_size_kb": 64, 00:23:29.084 "state": "online", 00:23:29.084 "raid_level": "raid0", 00:23:29.084 "superblock": true, 00:23:29.084 "num_base_bdevs": 4, 00:23:29.084 "num_base_bdevs_discovered": 4, 00:23:29.084 "num_base_bdevs_operational": 4, 00:23:29.084 "base_bdevs_list": [ 00:23:29.084 { 00:23:29.084 "name": "BaseBdev1", 00:23:29.084 "uuid": "20d6a889-64da-5fe4-b379-60429ecf2665", 00:23:29.084 "is_configured": true, 00:23:29.084 "data_offset": 2048, 00:23:29.084 "data_size": 63488 00:23:29.084 }, 00:23:29.084 { 
00:23:29.084 "name": "BaseBdev2", 00:23:29.084 "uuid": "3012f40f-0b75-548f-a576-07cb5042f59e", 00:23:29.084 "is_configured": true, 00:23:29.084 "data_offset": 2048, 00:23:29.084 "data_size": 63488 00:23:29.084 }, 00:23:29.084 { 00:23:29.084 "name": "BaseBdev3", 00:23:29.084 "uuid": "63563c0d-7c37-54e9-b568-9efeca401cd9", 00:23:29.084 "is_configured": true, 00:23:29.084 "data_offset": 2048, 00:23:29.084 "data_size": 63488 00:23:29.084 }, 00:23:29.084 { 00:23:29.084 "name": "BaseBdev4", 00:23:29.084 "uuid": "582eb4d4-f8f2-5051-a045-5254ac1d73bb", 00:23:29.084 "is_configured": true, 00:23:29.084 "data_offset": 2048, 00:23:29.084 "data_size": 63488 00:23:29.084 } 00:23:29.084 ] 00:23:29.084 }' 00:23:29.084 14:16:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:29.084 14:16:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:29.651 14:16:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:23:29.651 14:16:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:23:29.911 [2024-07-15 14:16:15.701402] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:23:30.847 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.106 14:16:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.366 14:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:31.366 "name": "raid_bdev1", 00:23:31.366 "uuid": "0513e2c6-3949-40f5-9fdb-b0d4a3d7e6f1", 00:23:31.366 "strip_size_kb": 64, 00:23:31.366 "state": "online", 00:23:31.366 
"raid_level": "raid0", 00:23:31.366 "superblock": true, 00:23:31.366 "num_base_bdevs": 4, 00:23:31.366 "num_base_bdevs_discovered": 4, 00:23:31.366 "num_base_bdevs_operational": 4, 00:23:31.366 "base_bdevs_list": [ 00:23:31.366 { 00:23:31.366 "name": "BaseBdev1", 00:23:31.366 "uuid": "20d6a889-64da-5fe4-b379-60429ecf2665", 00:23:31.366 "is_configured": true, 00:23:31.366 "data_offset": 2048, 00:23:31.366 "data_size": 63488 00:23:31.366 }, 00:23:31.366 { 00:23:31.366 "name": "BaseBdev2", 00:23:31.366 "uuid": "3012f40f-0b75-548f-a576-07cb5042f59e", 00:23:31.366 "is_configured": true, 00:23:31.366 "data_offset": 2048, 00:23:31.366 "data_size": 63488 00:23:31.366 }, 00:23:31.366 { 00:23:31.366 "name": "BaseBdev3", 00:23:31.366 "uuid": "63563c0d-7c37-54e9-b568-9efeca401cd9", 00:23:31.366 "is_configured": true, 00:23:31.366 "data_offset": 2048, 00:23:31.366 "data_size": 63488 00:23:31.366 }, 00:23:31.366 { 00:23:31.366 "name": "BaseBdev4", 00:23:31.366 "uuid": "582eb4d4-f8f2-5051-a045-5254ac1d73bb", 00:23:31.366 "is_configured": true, 00:23:31.366 "data_offset": 2048, 00:23:31.366 "data_size": 63488 00:23:31.366 } 00:23:31.366 ] 00:23:31.366 }' 00:23:31.366 14:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:31.366 14:16:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:31.932 14:16:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:32.190 [2024-07-15 14:16:18.193698] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.190 [2024-07-15 14:16:18.193988] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.449 [2024-07-15 14:16:18.195519] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.449 [2024-07-15 14:16:18.195694] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:32.449 [2024-07-15 14:16:18.195784] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:32.449 [2024-07-15 14:16:18.195894] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:23:32.449 0 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 203383 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 203383 ']' 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 203383 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 203383 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 203383' 00:23:32.449 killing process with pid 203383 00:23:32.449 14:16:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 203383 00:23:32.449 14:16:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 203383 00:23:32.449 [2024-07-15 14:16:18.246998] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:32.707 [2024-07-15 14:16:18.531700] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZcwMFzhYYj 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.40 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.40 != \0\.\0\0 ]] 00:23:34.082 00:23:34.082 real 0m9.524s 00:23:34.082 user 0m14.796s 00:23:34.082 sys 0m1.080s 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:34.082 14:16:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.082 ************************************ 00:23:34.082 END TEST raid_write_error_test 00:23:34.082 ************************************ 00:23:34.082 14:16:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:23:34.082 14:16:19 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:23:34.082 14:16:19 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:23:34.082 14:16:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:23:34.082 14:16:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:34.082 14:16:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:34.082 ************************************ 00:23:34.082 START TEST raid_state_function_test 00:23:34.082 ************************************ 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( 
i++ )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=203596 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 203596' 00:23:34.082 Process raid pid: 203596 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 203596 /var/tmp/spdk-raid.sock 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 203596 ']' 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:34.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
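The raid_state_function_test starting here exercises only the configuration path, so it runs against the lightweight bdev_svc app instead of bdevperf. As the trace below plays out, a concat raid named Existed_Raid is created from base bdevs that do not exist yet, must report the "configuring" state until all four are registered, and may only switch to "online" once the last one is claimed. A minimal sketch of the opening check, using the socket from this run (selecting .state directly is just for illustration; the script keeps the whole descriptor and filters it with jq):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# create the raid before any of its base bdevs exist
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# the raid is registered but cannot assemble yet
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # expect: configuring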
00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.082 14:16:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:34.082 [2024-07-15 14:16:19.831843] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:23:34.082 [2024-07-15 14:16:19.832305] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.082 [2024-07-15 14:16:19.999442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.340 [2024-07-15 14:16:20.218675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.598 [2024-07-15 14:16:20.417716] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:34.904 14:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.904 14:16:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:23:34.904 14:16:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:35.163 [2024-07-15 14:16:21.102915] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:35.163 [2024-07-15 14:16:21.103503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:35.163 [2024-07-15 14:16:21.103674] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:35.163 [2024-07-15 14:16:21.103823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:35.163 [2024-07-15 14:16:21.103964] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:35.163 [2024-07-15 14:16:21.104092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:35.163 [2024-07-15 14:16:21.104210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:35.163 [2024-07-15 14:16:21.104338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:35.163 14:16:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:35.163 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:35.421 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:35.422 "name": "Existed_Raid", 00:23:35.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.422 "strip_size_kb": 64, 00:23:35.422 "state": "configuring", 00:23:35.422 "raid_level": "concat", 00:23:35.422 "superblock": false, 00:23:35.422 "num_base_bdevs": 4, 00:23:35.422 "num_base_bdevs_discovered": 0, 00:23:35.422 "num_base_bdevs_operational": 4, 00:23:35.422 "base_bdevs_list": [ 00:23:35.422 { 00:23:35.422 "name": "BaseBdev1", 00:23:35.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.422 "is_configured": false, 00:23:35.422 "data_offset": 0, 00:23:35.422 "data_size": 0 00:23:35.422 }, 00:23:35.422 { 00:23:35.422 "name": "BaseBdev2", 00:23:35.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.422 "is_configured": false, 00:23:35.422 "data_offset": 0, 00:23:35.422 "data_size": 0 00:23:35.422 }, 00:23:35.422 { 00:23:35.422 "name": "BaseBdev3", 00:23:35.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.422 "is_configured": false, 00:23:35.422 "data_offset": 0, 00:23:35.422 "data_size": 0 00:23:35.422 }, 00:23:35.422 { 00:23:35.422 "name": "BaseBdev4", 00:23:35.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:35.422 "is_configured": false, 00:23:35.422 "data_offset": 0, 00:23:35.422 "data_size": 0 00:23:35.422 } 00:23:35.422 ] 00:23:35.422 }' 00:23:35.422 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:35.422 14:16:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:35.988 14:16:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:36.247 [2024-07-15 14:16:22.183088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:36.247 [2024-07-15 14:16:22.183477] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:36.247 14:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:36.506 [2024-07-15 14:16:22.415078] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:36.506 [2024-07-15 14:16:22.415713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:36.506 [2024-07-15 14:16:22.415907] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:36.506 [2024-07-15 14:16:22.416045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:36.506 [2024-07-15 14:16:22.416220] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:36.506 [2024-07-15 14:16:22.416369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:36.506 [2024-07-15 
14:16:22.416482] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:36.506 [2024-07-15 14:16:22.416628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:36.506 14:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:36.765 [2024-07-15 14:16:22.682326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:36.765 BaseBdev1 00:23:36.765 14:16:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:36.765 14:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:36.765 14:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:36.765 14:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:36.765 14:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:36.765 14:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:36.765 14:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:37.023 14:16:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:37.282 [ 00:23:37.282 { 00:23:37.282 "name": "BaseBdev1", 00:23:37.282 "aliases": [ 00:23:37.282 "9bec3225-93de-4c78-a4f8-8e1467a1a103" 00:23:37.282 ], 00:23:37.282 "product_name": "Malloc disk", 00:23:37.282 "block_size": 512, 00:23:37.282 "num_blocks": 65536, 00:23:37.282 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:37.282 "assigned_rate_limits": { 00:23:37.282 "rw_ios_per_sec": 0, 00:23:37.282 "rw_mbytes_per_sec": 0, 00:23:37.282 "r_mbytes_per_sec": 0, 00:23:37.282 "w_mbytes_per_sec": 0 00:23:37.282 }, 00:23:37.282 "claimed": true, 00:23:37.282 "claim_type": "exclusive_write", 00:23:37.282 "zoned": false, 00:23:37.282 "supported_io_types": { 00:23:37.282 "read": true, 00:23:37.282 "write": true, 00:23:37.282 "unmap": true, 00:23:37.282 "flush": true, 00:23:37.282 "reset": true, 00:23:37.282 "nvme_admin": false, 00:23:37.282 "nvme_io": false, 00:23:37.282 "nvme_io_md": false, 00:23:37.282 "write_zeroes": true, 00:23:37.282 "zcopy": true, 00:23:37.282 "get_zone_info": false, 00:23:37.282 "zone_management": false, 00:23:37.282 "zone_append": false, 00:23:37.282 "compare": false, 00:23:37.282 "compare_and_write": false, 00:23:37.282 "abort": true, 00:23:37.282 "seek_hole": false, 00:23:37.282 "seek_data": false, 00:23:37.282 "copy": true, 00:23:37.282 "nvme_iov_md": false 00:23:37.282 }, 00:23:37.282 "memory_domains": [ 00:23:37.282 { 00:23:37.282 "dma_device_id": "system", 00:23:37.282 "dma_device_type": 1 00:23:37.282 }, 00:23:37.282 { 00:23:37.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.282 "dma_device_type": 2 00:23:37.282 } 00:23:37.282 ], 00:23:37.282 "driver_specific": {} 00:23:37.282 } 00:23:37.282 ] 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
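verify_raid_bdev_state, invoked here and again after every base bdev is added, fetches the raid descriptor via bdev_raid_get_bdevs all, selects the named entry with jq, and checks it against the expected values passed as arguments (state, level, strip size, operational base bdev count). A rough stand-in for this particular call, with field names taken from the JSON dumps in this log (the real helper is the bdev/bdev_raid.sh code traced below):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# verify_raid_bdev_state Existed_Raid configuring concat 64 4, approximately:
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state                      <<< "$info") == configuring ]]
[[ $(jq -r .raid_level                 <<< "$info") == concat ]]
[[ $(jq -r .strip_size_kb              <<< "$info") == 64 ]]
[[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]
# num_base_bdevs_discovered climbs 1 -> 2 -> 3 -> 4 as BaseBdev2..4 are created below,
# and only after BaseBdev4 is claimed does .state flip to "online"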
00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.282 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.541 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.541 "name": "Existed_Raid", 00:23:37.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.541 "strip_size_kb": 64, 00:23:37.541 "state": "configuring", 00:23:37.541 "raid_level": "concat", 00:23:37.541 "superblock": false, 00:23:37.541 "num_base_bdevs": 4, 00:23:37.541 "num_base_bdevs_discovered": 1, 00:23:37.541 "num_base_bdevs_operational": 4, 00:23:37.541 "base_bdevs_list": [ 00:23:37.541 { 00:23:37.541 "name": "BaseBdev1", 00:23:37.541 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:37.541 "is_configured": true, 00:23:37.541 "data_offset": 0, 00:23:37.541 "data_size": 65536 00:23:37.541 }, 00:23:37.541 { 00:23:37.541 "name": "BaseBdev2", 00:23:37.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.541 "is_configured": false, 00:23:37.541 "data_offset": 0, 00:23:37.541 "data_size": 0 00:23:37.541 }, 00:23:37.541 { 00:23:37.541 "name": "BaseBdev3", 00:23:37.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.541 "is_configured": false, 00:23:37.541 "data_offset": 0, 00:23:37.541 "data_size": 0 00:23:37.541 }, 00:23:37.541 { 00:23:37.541 "name": "BaseBdev4", 00:23:37.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.541 "is_configured": false, 00:23:37.541 "data_offset": 0, 00:23:37.541 "data_size": 0 00:23:37.541 } 00:23:37.541 ] 00:23:37.541 }' 00:23:37.541 14:16:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.541 14:16:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:38.106 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:38.365 [2024-07-15 14:16:24.354610] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:38.365 [2024-07-15 14:16:24.354878] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:38.624 [2024-07-15 14:16:24.594677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:38.624 [2024-07-15 14:16:24.596432] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:38.624 [2024-07-15 14:16:24.597022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:38.624 [2024-07-15 14:16:24.597160] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:38.624 [2024-07-15 14:16:24.597305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:38.624 [2024-07-15 14:16:24.597418] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:38.624 [2024-07-15 14:16:24.597545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.624 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.896 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.896 "name": "Existed_Raid", 00:23:38.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.896 "strip_size_kb": 64, 00:23:38.896 "state": "configuring", 00:23:38.896 "raid_level": "concat", 00:23:38.896 "superblock": false, 00:23:38.896 "num_base_bdevs": 4, 00:23:38.896 "num_base_bdevs_discovered": 1, 00:23:38.896 "num_base_bdevs_operational": 4, 00:23:38.896 "base_bdevs_list": [ 00:23:38.896 { 00:23:38.896 "name": "BaseBdev1", 00:23:38.896 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:38.896 "is_configured": true, 00:23:38.896 "data_offset": 0, 00:23:38.896 "data_size": 65536 00:23:38.896 }, 00:23:38.896 { 00:23:38.896 "name": "BaseBdev2", 00:23:38.896 "uuid": "00000000-0000-0000-0000-000000000000", 
00:23:38.896 "is_configured": false, 00:23:38.896 "data_offset": 0, 00:23:38.896 "data_size": 0 00:23:38.896 }, 00:23:38.896 { 00:23:38.896 "name": "BaseBdev3", 00:23:38.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.896 "is_configured": false, 00:23:38.896 "data_offset": 0, 00:23:38.896 "data_size": 0 00:23:38.896 }, 00:23:38.896 { 00:23:38.896 "name": "BaseBdev4", 00:23:38.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.896 "is_configured": false, 00:23:38.896 "data_offset": 0, 00:23:38.896 "data_size": 0 00:23:38.896 } 00:23:38.896 ] 00:23:38.896 }' 00:23:38.896 14:16:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.896 14:16:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:39.476 14:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:39.734 [2024-07-15 14:16:25.721061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:39.734 BaseBdev2 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:39.993 14:16:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:40.251 [ 00:23:40.251 { 00:23:40.251 "name": "BaseBdev2", 00:23:40.251 "aliases": [ 00:23:40.251 "ab0743cb-f528-4560-884a-4c5b0a010f2e" 00:23:40.251 ], 00:23:40.251 "product_name": "Malloc disk", 00:23:40.251 "block_size": 512, 00:23:40.251 "num_blocks": 65536, 00:23:40.251 "uuid": "ab0743cb-f528-4560-884a-4c5b0a010f2e", 00:23:40.251 "assigned_rate_limits": { 00:23:40.251 "rw_ios_per_sec": 0, 00:23:40.251 "rw_mbytes_per_sec": 0, 00:23:40.251 "r_mbytes_per_sec": 0, 00:23:40.251 "w_mbytes_per_sec": 0 00:23:40.251 }, 00:23:40.251 "claimed": true, 00:23:40.251 "claim_type": "exclusive_write", 00:23:40.251 "zoned": false, 00:23:40.251 "supported_io_types": { 00:23:40.251 "read": true, 00:23:40.251 "write": true, 00:23:40.251 "unmap": true, 00:23:40.251 "flush": true, 00:23:40.251 "reset": true, 00:23:40.251 "nvme_admin": false, 00:23:40.251 "nvme_io": false, 00:23:40.251 "nvme_io_md": false, 00:23:40.251 "write_zeroes": true, 00:23:40.251 "zcopy": true, 00:23:40.251 "get_zone_info": false, 00:23:40.251 "zone_management": false, 00:23:40.251 "zone_append": false, 00:23:40.251 "compare": false, 00:23:40.251 "compare_and_write": false, 00:23:40.251 "abort": true, 00:23:40.251 "seek_hole": false, 00:23:40.251 "seek_data": false, 00:23:40.251 "copy": true, 00:23:40.251 "nvme_iov_md": false 00:23:40.251 }, 00:23:40.251 "memory_domains": [ 
00:23:40.251 { 00:23:40.251 "dma_device_id": "system", 00:23:40.251 "dma_device_type": 1 00:23:40.251 }, 00:23:40.251 { 00:23:40.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.251 "dma_device_type": 2 00:23:40.251 } 00:23:40.251 ], 00:23:40.251 "driver_specific": {} 00:23:40.251 } 00:23:40.251 ] 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.251 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.510 14:16:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.510 "name": "Existed_Raid", 00:23:40.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.510 "strip_size_kb": 64, 00:23:40.510 "state": "configuring", 00:23:40.510 "raid_level": "concat", 00:23:40.510 "superblock": false, 00:23:40.510 "num_base_bdevs": 4, 00:23:40.510 "num_base_bdevs_discovered": 2, 00:23:40.510 "num_base_bdevs_operational": 4, 00:23:40.510 "base_bdevs_list": [ 00:23:40.510 { 00:23:40.510 "name": "BaseBdev1", 00:23:40.510 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:40.510 "is_configured": true, 00:23:40.510 "data_offset": 0, 00:23:40.510 "data_size": 65536 00:23:40.510 }, 00:23:40.510 { 00:23:40.510 "name": "BaseBdev2", 00:23:40.510 "uuid": "ab0743cb-f528-4560-884a-4c5b0a010f2e", 00:23:40.510 "is_configured": true, 00:23:40.510 "data_offset": 0, 00:23:40.510 "data_size": 65536 00:23:40.510 }, 00:23:40.510 { 00:23:40.510 "name": "BaseBdev3", 00:23:40.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.510 "is_configured": false, 00:23:40.510 "data_offset": 0, 00:23:40.510 "data_size": 0 00:23:40.510 }, 00:23:40.510 { 00:23:40.510 "name": "BaseBdev4", 00:23:40.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.510 "is_configured": false, 00:23:40.510 "data_offset": 0, 00:23:40.510 "data_size": 0 00:23:40.510 } 00:23:40.510 ] 00:23:40.510 }' 00:23:40.510 14:16:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.510 14:16:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:41.446 [2024-07-15 14:16:27.414277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:41.446 BaseBdev3 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:41.446 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:41.706 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:41.965 [ 00:23:41.965 { 00:23:41.965 "name": "BaseBdev3", 00:23:41.965 "aliases": [ 00:23:41.965 "be770841-9f13-4b68-b1d3-366197ce747f" 00:23:41.965 ], 00:23:41.965 "product_name": "Malloc disk", 00:23:41.965 "block_size": 512, 00:23:41.965 "num_blocks": 65536, 00:23:41.965 "uuid": "be770841-9f13-4b68-b1d3-366197ce747f", 00:23:41.965 "assigned_rate_limits": { 00:23:41.965 "rw_ios_per_sec": 0, 00:23:41.965 "rw_mbytes_per_sec": 0, 00:23:41.965 "r_mbytes_per_sec": 0, 00:23:41.965 "w_mbytes_per_sec": 0 00:23:41.965 }, 00:23:41.965 "claimed": true, 00:23:41.965 "claim_type": "exclusive_write", 00:23:41.965 "zoned": false, 00:23:41.965 "supported_io_types": { 00:23:41.965 "read": true, 00:23:41.965 "write": true, 00:23:41.965 "unmap": true, 00:23:41.965 "flush": true, 00:23:41.965 "reset": true, 00:23:41.965 "nvme_admin": false, 00:23:41.965 "nvme_io": false, 00:23:41.965 "nvme_io_md": false, 00:23:41.965 "write_zeroes": true, 00:23:41.965 "zcopy": true, 00:23:41.965 "get_zone_info": false, 00:23:41.965 "zone_management": false, 00:23:41.965 "zone_append": false, 00:23:41.965 "compare": false, 00:23:41.965 "compare_and_write": false, 00:23:41.965 "abort": true, 00:23:41.965 "seek_hole": false, 00:23:41.965 "seek_data": false, 00:23:41.965 "copy": true, 00:23:41.965 "nvme_iov_md": false 00:23:41.965 }, 00:23:41.965 "memory_domains": [ 00:23:41.965 { 00:23:41.965 "dma_device_id": "system", 00:23:41.965 "dma_device_type": 1 00:23:41.965 }, 00:23:41.965 { 00:23:41.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:41.965 "dma_device_type": 2 00:23:41.965 } 00:23:41.965 ], 00:23:41.965 "driver_specific": {} 00:23:41.965 } 00:23:41.965 ] 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs 
)) 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.965 14:16:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:42.533 14:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:42.533 "name": "Existed_Raid", 00:23:42.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.533 "strip_size_kb": 64, 00:23:42.533 "state": "configuring", 00:23:42.533 "raid_level": "concat", 00:23:42.533 "superblock": false, 00:23:42.533 "num_base_bdevs": 4, 00:23:42.533 "num_base_bdevs_discovered": 3, 00:23:42.533 "num_base_bdevs_operational": 4, 00:23:42.533 "base_bdevs_list": [ 00:23:42.533 { 00:23:42.533 "name": "BaseBdev1", 00:23:42.533 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:42.533 "is_configured": true, 00:23:42.533 "data_offset": 0, 00:23:42.533 "data_size": 65536 00:23:42.533 }, 00:23:42.533 { 00:23:42.533 "name": "BaseBdev2", 00:23:42.533 "uuid": "ab0743cb-f528-4560-884a-4c5b0a010f2e", 00:23:42.533 "is_configured": true, 00:23:42.533 "data_offset": 0, 00:23:42.533 "data_size": 65536 00:23:42.533 }, 00:23:42.533 { 00:23:42.533 "name": "BaseBdev3", 00:23:42.533 "uuid": "be770841-9f13-4b68-b1d3-366197ce747f", 00:23:42.533 "is_configured": true, 00:23:42.533 "data_offset": 0, 00:23:42.533 "data_size": 65536 00:23:42.533 }, 00:23:42.533 { 00:23:42.533 "name": "BaseBdev4", 00:23:42.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.533 "is_configured": false, 00:23:42.533 "data_offset": 0, 00:23:42.533 "data_size": 0 00:23:42.533 } 00:23:42.533 ] 00:23:42.533 }' 00:23:42.533 14:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:42.533 14:16:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:43.101 14:16:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:43.360 [2024-07-15 14:16:29.140111] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:43.360 [2024-07-15 14:16:29.140321] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000007280 00:23:43.360 [2024-07-15 14:16:29.140371] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:43.360 [2024-07-15 14:16:29.140592] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:43.360 [2024-07-15 14:16:29.140996] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:23:43.360 [2024-07-15 14:16:29.141125] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:23:43.360 [2024-07-15 14:16:29.141419] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:43.360 BaseBdev4 00:23:43.360 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:43.360 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:43.360 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:43.360 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:43.360 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:43.360 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:43.360 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:43.618 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:43.877 [ 00:23:43.877 { 00:23:43.877 "name": "BaseBdev4", 00:23:43.877 "aliases": [ 00:23:43.877 "fb3a32c3-68e9-4dfc-93fe-b28e1048b2de" 00:23:43.877 ], 00:23:43.877 "product_name": "Malloc disk", 00:23:43.877 "block_size": 512, 00:23:43.877 "num_blocks": 65536, 00:23:43.877 "uuid": "fb3a32c3-68e9-4dfc-93fe-b28e1048b2de", 00:23:43.877 "assigned_rate_limits": { 00:23:43.877 "rw_ios_per_sec": 0, 00:23:43.877 "rw_mbytes_per_sec": 0, 00:23:43.877 "r_mbytes_per_sec": 0, 00:23:43.877 "w_mbytes_per_sec": 0 00:23:43.877 }, 00:23:43.877 "claimed": true, 00:23:43.877 "claim_type": "exclusive_write", 00:23:43.877 "zoned": false, 00:23:43.877 "supported_io_types": { 00:23:43.877 "read": true, 00:23:43.877 "write": true, 00:23:43.877 "unmap": true, 00:23:43.877 "flush": true, 00:23:43.877 "reset": true, 00:23:43.877 "nvme_admin": false, 00:23:43.877 "nvme_io": false, 00:23:43.877 "nvme_io_md": false, 00:23:43.877 "write_zeroes": true, 00:23:43.877 "zcopy": true, 00:23:43.877 "get_zone_info": false, 00:23:43.877 "zone_management": false, 00:23:43.877 "zone_append": false, 00:23:43.877 "compare": false, 00:23:43.877 "compare_and_write": false, 00:23:43.877 "abort": true, 00:23:43.877 "seek_hole": false, 00:23:43.877 "seek_data": false, 00:23:43.877 "copy": true, 00:23:43.877 "nvme_iov_md": false 00:23:43.877 }, 00:23:43.877 "memory_domains": [ 00:23:43.877 { 00:23:43.877 "dma_device_id": "system", 00:23:43.877 "dma_device_type": 1 00:23:43.877 }, 00:23:43.877 { 00:23:43.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:43.877 "dma_device_type": 2 00:23:43.877 } 00:23:43.877 ], 00:23:43.877 "driver_specific": {} 00:23:43.877 } 00:23:43.877 ] 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:43.877 14:16:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.877 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.135 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.135 "name": "Existed_Raid", 00:23:44.135 "uuid": "7fe913ae-d903-4682-96e8-343cee575c9a", 00:23:44.135 "strip_size_kb": 64, 00:23:44.135 "state": "online", 00:23:44.135 "raid_level": "concat", 00:23:44.135 "superblock": false, 00:23:44.135 "num_base_bdevs": 4, 00:23:44.135 "num_base_bdevs_discovered": 4, 00:23:44.135 "num_base_bdevs_operational": 4, 00:23:44.135 "base_bdevs_list": [ 00:23:44.135 { 00:23:44.135 "name": "BaseBdev1", 00:23:44.135 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:44.135 "is_configured": true, 00:23:44.135 "data_offset": 0, 00:23:44.135 "data_size": 65536 00:23:44.135 }, 00:23:44.135 { 00:23:44.135 "name": "BaseBdev2", 00:23:44.135 "uuid": "ab0743cb-f528-4560-884a-4c5b0a010f2e", 00:23:44.135 "is_configured": true, 00:23:44.135 "data_offset": 0, 00:23:44.135 "data_size": 65536 00:23:44.135 }, 00:23:44.135 { 00:23:44.135 "name": "BaseBdev3", 00:23:44.135 "uuid": "be770841-9f13-4b68-b1d3-366197ce747f", 00:23:44.135 "is_configured": true, 00:23:44.135 "data_offset": 0, 00:23:44.135 "data_size": 65536 00:23:44.136 }, 00:23:44.136 { 00:23:44.136 "name": "BaseBdev4", 00:23:44.136 "uuid": "fb3a32c3-68e9-4dfc-93fe-b28e1048b2de", 00:23:44.136 "is_configured": true, 00:23:44.136 "data_offset": 0, 00:23:44.136 "data_size": 65536 00:23:44.136 } 00:23:44.136 ] 00:23:44.136 }' 00:23:44.136 14:16:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.136 14:16:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:44.701 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:44.701 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:44.702 14:16:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:44.702 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:44.702 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:44.702 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:44.702 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:44.702 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:44.959 [2024-07-15 14:16:30.844627] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:44.959 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:44.959 "name": "Existed_Raid", 00:23:44.959 "aliases": [ 00:23:44.959 "7fe913ae-d903-4682-96e8-343cee575c9a" 00:23:44.959 ], 00:23:44.959 "product_name": "Raid Volume", 00:23:44.959 "block_size": 512, 00:23:44.959 "num_blocks": 262144, 00:23:44.959 "uuid": "7fe913ae-d903-4682-96e8-343cee575c9a", 00:23:44.959 "assigned_rate_limits": { 00:23:44.959 "rw_ios_per_sec": 0, 00:23:44.959 "rw_mbytes_per_sec": 0, 00:23:44.959 "r_mbytes_per_sec": 0, 00:23:44.959 "w_mbytes_per_sec": 0 00:23:44.959 }, 00:23:44.959 "claimed": false, 00:23:44.959 "zoned": false, 00:23:44.959 "supported_io_types": { 00:23:44.959 "read": true, 00:23:44.959 "write": true, 00:23:44.959 "unmap": true, 00:23:44.959 "flush": true, 00:23:44.959 "reset": true, 00:23:44.959 "nvme_admin": false, 00:23:44.959 "nvme_io": false, 00:23:44.959 "nvme_io_md": false, 00:23:44.959 "write_zeroes": true, 00:23:44.959 "zcopy": false, 00:23:44.959 "get_zone_info": false, 00:23:44.959 "zone_management": false, 00:23:44.959 "zone_append": false, 00:23:44.959 "compare": false, 00:23:44.959 "compare_and_write": false, 00:23:44.959 "abort": false, 00:23:44.959 "seek_hole": false, 00:23:44.959 "seek_data": false, 00:23:44.959 "copy": false, 00:23:44.959 "nvme_iov_md": false 00:23:44.959 }, 00:23:44.959 "memory_domains": [ 00:23:44.959 { 00:23:44.959 "dma_device_id": "system", 00:23:44.959 "dma_device_type": 1 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.959 "dma_device_type": 2 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "dma_device_id": "system", 00:23:44.959 "dma_device_type": 1 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.959 "dma_device_type": 2 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "dma_device_id": "system", 00:23:44.959 "dma_device_type": 1 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.959 "dma_device_type": 2 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "dma_device_id": "system", 00:23:44.959 "dma_device_type": 1 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:44.959 "dma_device_type": 2 00:23:44.959 } 00:23:44.959 ], 00:23:44.959 "driver_specific": { 00:23:44.959 "raid": { 00:23:44.959 "uuid": "7fe913ae-d903-4682-96e8-343cee575c9a", 00:23:44.959 "strip_size_kb": 64, 00:23:44.959 "state": "online", 00:23:44.959 "raid_level": "concat", 00:23:44.959 "superblock": false, 00:23:44.959 "num_base_bdevs": 4, 00:23:44.959 "num_base_bdevs_discovered": 4, 00:23:44.959 "num_base_bdevs_operational": 4, 00:23:44.959 "base_bdevs_list": [ 00:23:44.959 { 
00:23:44.959 "name": "BaseBdev1", 00:23:44.959 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:44.959 "is_configured": true, 00:23:44.959 "data_offset": 0, 00:23:44.959 "data_size": 65536 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "name": "BaseBdev2", 00:23:44.959 "uuid": "ab0743cb-f528-4560-884a-4c5b0a010f2e", 00:23:44.959 "is_configured": true, 00:23:44.959 "data_offset": 0, 00:23:44.959 "data_size": 65536 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "name": "BaseBdev3", 00:23:44.959 "uuid": "be770841-9f13-4b68-b1d3-366197ce747f", 00:23:44.959 "is_configured": true, 00:23:44.959 "data_offset": 0, 00:23:44.959 "data_size": 65536 00:23:44.959 }, 00:23:44.959 { 00:23:44.959 "name": "BaseBdev4", 00:23:44.959 "uuid": "fb3a32c3-68e9-4dfc-93fe-b28e1048b2de", 00:23:44.959 "is_configured": true, 00:23:44.959 "data_offset": 0, 00:23:44.959 "data_size": 65536 00:23:44.959 } 00:23:44.959 ] 00:23:44.959 } 00:23:44.959 } 00:23:44.959 }' 00:23:44.959 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:44.959 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:44.959 BaseBdev2 00:23:44.959 BaseBdev3 00:23:44.959 BaseBdev4' 00:23:44.959 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:44.959 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:44.959 14:16:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:45.216 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:45.216 "name": "BaseBdev1", 00:23:45.216 "aliases": [ 00:23:45.216 "9bec3225-93de-4c78-a4f8-8e1467a1a103" 00:23:45.216 ], 00:23:45.216 "product_name": "Malloc disk", 00:23:45.216 "block_size": 512, 00:23:45.216 "num_blocks": 65536, 00:23:45.216 "uuid": "9bec3225-93de-4c78-a4f8-8e1467a1a103", 00:23:45.216 "assigned_rate_limits": { 00:23:45.216 "rw_ios_per_sec": 0, 00:23:45.216 "rw_mbytes_per_sec": 0, 00:23:45.216 "r_mbytes_per_sec": 0, 00:23:45.216 "w_mbytes_per_sec": 0 00:23:45.216 }, 00:23:45.216 "claimed": true, 00:23:45.216 "claim_type": "exclusive_write", 00:23:45.216 "zoned": false, 00:23:45.216 "supported_io_types": { 00:23:45.216 "read": true, 00:23:45.216 "write": true, 00:23:45.216 "unmap": true, 00:23:45.216 "flush": true, 00:23:45.216 "reset": true, 00:23:45.216 "nvme_admin": false, 00:23:45.216 "nvme_io": false, 00:23:45.216 "nvme_io_md": false, 00:23:45.216 "write_zeroes": true, 00:23:45.216 "zcopy": true, 00:23:45.216 "get_zone_info": false, 00:23:45.216 "zone_management": false, 00:23:45.216 "zone_append": false, 00:23:45.216 "compare": false, 00:23:45.216 "compare_and_write": false, 00:23:45.216 "abort": true, 00:23:45.216 "seek_hole": false, 00:23:45.216 "seek_data": false, 00:23:45.216 "copy": true, 00:23:45.216 "nvme_iov_md": false 00:23:45.216 }, 00:23:45.216 "memory_domains": [ 00:23:45.216 { 00:23:45.216 "dma_device_id": "system", 00:23:45.216 "dma_device_type": 1 00:23:45.216 }, 00:23:45.216 { 00:23:45.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.216 "dma_device_type": 2 00:23:45.216 } 00:23:45.216 ], 00:23:45.216 "driver_specific": {} 00:23:45.216 }' 00:23:45.216 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.472 14:16:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.472 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:45.472 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.472 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.473 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:45.473 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.473 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:45.473 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:45.473 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.729 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:45.729 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:45.729 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:45.729 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:45.729 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:45.987 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:45.987 "name": "BaseBdev2", 00:23:45.987 "aliases": [ 00:23:45.987 "ab0743cb-f528-4560-884a-4c5b0a010f2e" 00:23:45.987 ], 00:23:45.987 "product_name": "Malloc disk", 00:23:45.987 "block_size": 512, 00:23:45.987 "num_blocks": 65536, 00:23:45.987 "uuid": "ab0743cb-f528-4560-884a-4c5b0a010f2e", 00:23:45.987 "assigned_rate_limits": { 00:23:45.987 "rw_ios_per_sec": 0, 00:23:45.987 "rw_mbytes_per_sec": 0, 00:23:45.987 "r_mbytes_per_sec": 0, 00:23:45.987 "w_mbytes_per_sec": 0 00:23:45.987 }, 00:23:45.987 "claimed": true, 00:23:45.987 "claim_type": "exclusive_write", 00:23:45.987 "zoned": false, 00:23:45.987 "supported_io_types": { 00:23:45.987 "read": true, 00:23:45.987 "write": true, 00:23:45.987 "unmap": true, 00:23:45.987 "flush": true, 00:23:45.987 "reset": true, 00:23:45.987 "nvme_admin": false, 00:23:45.987 "nvme_io": false, 00:23:45.987 "nvme_io_md": false, 00:23:45.987 "write_zeroes": true, 00:23:45.987 "zcopy": true, 00:23:45.987 "get_zone_info": false, 00:23:45.987 "zone_management": false, 00:23:45.987 "zone_append": false, 00:23:45.987 "compare": false, 00:23:45.987 "compare_and_write": false, 00:23:45.987 "abort": true, 00:23:45.987 "seek_hole": false, 00:23:45.987 "seek_data": false, 00:23:45.987 "copy": true, 00:23:45.987 "nvme_iov_md": false 00:23:45.987 }, 00:23:45.987 "memory_domains": [ 00:23:45.987 { 00:23:45.987 "dma_device_id": "system", 00:23:45.987 "dma_device_type": 1 00:23:45.987 }, 00:23:45.987 { 00:23:45.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:45.987 "dma_device_type": 2 00:23:45.987 } 00:23:45.987 ], 00:23:45.987 "driver_specific": {} 00:23:45.987 }' 00:23:45.987 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.987 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:45.987 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:45.987 
14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.987 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:45.987 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:45.987 14:16:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:46.245 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:46.523 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:46.523 "name": "BaseBdev3", 00:23:46.523 "aliases": [ 00:23:46.523 "be770841-9f13-4b68-b1d3-366197ce747f" 00:23:46.523 ], 00:23:46.523 "product_name": "Malloc disk", 00:23:46.523 "block_size": 512, 00:23:46.523 "num_blocks": 65536, 00:23:46.523 "uuid": "be770841-9f13-4b68-b1d3-366197ce747f", 00:23:46.523 "assigned_rate_limits": { 00:23:46.523 "rw_ios_per_sec": 0, 00:23:46.523 "rw_mbytes_per_sec": 0, 00:23:46.523 "r_mbytes_per_sec": 0, 00:23:46.523 "w_mbytes_per_sec": 0 00:23:46.523 }, 00:23:46.523 "claimed": true, 00:23:46.523 "claim_type": "exclusive_write", 00:23:46.523 "zoned": false, 00:23:46.523 "supported_io_types": { 00:23:46.523 "read": true, 00:23:46.523 "write": true, 00:23:46.523 "unmap": true, 00:23:46.523 "flush": true, 00:23:46.523 "reset": true, 00:23:46.523 "nvme_admin": false, 00:23:46.523 "nvme_io": false, 00:23:46.523 "nvme_io_md": false, 00:23:46.523 "write_zeroes": true, 00:23:46.523 "zcopy": true, 00:23:46.523 "get_zone_info": false, 00:23:46.523 "zone_management": false, 00:23:46.523 "zone_append": false, 00:23:46.523 "compare": false, 00:23:46.523 "compare_and_write": false, 00:23:46.523 "abort": true, 00:23:46.523 "seek_hole": false, 00:23:46.523 "seek_data": false, 00:23:46.523 "copy": true, 00:23:46.523 "nvme_iov_md": false 00:23:46.523 }, 00:23:46.523 "memory_domains": [ 00:23:46.523 { 00:23:46.523 "dma_device_id": "system", 00:23:46.523 "dma_device_type": 1 00:23:46.523 }, 00:23:46.523 { 00:23:46.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:46.523 "dma_device_type": 2 00:23:46.523 } 00:23:46.523 ], 00:23:46.524 "driver_specific": {} 00:23:46.524 }' 00:23:46.524 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.524 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:46.524 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:46.524 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:46.782 
14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:46.782 14:16:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:47.353 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:47.353 "name": "BaseBdev4", 00:23:47.353 "aliases": [ 00:23:47.353 "fb3a32c3-68e9-4dfc-93fe-b28e1048b2de" 00:23:47.353 ], 00:23:47.353 "product_name": "Malloc disk", 00:23:47.353 "block_size": 512, 00:23:47.353 "num_blocks": 65536, 00:23:47.353 "uuid": "fb3a32c3-68e9-4dfc-93fe-b28e1048b2de", 00:23:47.353 "assigned_rate_limits": { 00:23:47.353 "rw_ios_per_sec": 0, 00:23:47.353 "rw_mbytes_per_sec": 0, 00:23:47.353 "r_mbytes_per_sec": 0, 00:23:47.353 "w_mbytes_per_sec": 0 00:23:47.353 }, 00:23:47.353 "claimed": true, 00:23:47.353 "claim_type": "exclusive_write", 00:23:47.353 "zoned": false, 00:23:47.353 "supported_io_types": { 00:23:47.353 "read": true, 00:23:47.353 "write": true, 00:23:47.353 "unmap": true, 00:23:47.353 "flush": true, 00:23:47.353 "reset": true, 00:23:47.353 "nvme_admin": false, 00:23:47.353 "nvme_io": false, 00:23:47.353 "nvme_io_md": false, 00:23:47.353 "write_zeroes": true, 00:23:47.353 "zcopy": true, 00:23:47.353 "get_zone_info": false, 00:23:47.353 "zone_management": false, 00:23:47.354 "zone_append": false, 00:23:47.354 "compare": false, 00:23:47.354 "compare_and_write": false, 00:23:47.354 "abort": true, 00:23:47.354 "seek_hole": false, 00:23:47.354 "seek_data": false, 00:23:47.354 "copy": true, 00:23:47.354 "nvme_iov_md": false 00:23:47.354 }, 00:23:47.354 "memory_domains": [ 00:23:47.354 { 00:23:47.354 "dma_device_id": "system", 00:23:47.354 "dma_device_type": 1 00:23:47.354 }, 00:23:47.354 { 00:23:47.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.354 "dma_device_type": 2 00:23:47.354 } 00:23:47.354 ], 00:23:47.354 "driver_specific": {} 00:23:47.354 }' 00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:23:47.354 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:47.617 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:47.617 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.617 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:47.617 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:47.617 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:47.875 [2024-07-15 14:16:33.736914] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:47.875 [2024-07-15 14:16:33.737144] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:47.875 [2024-07-15 14:16:33.737301] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.875 14:16:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.132 14:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.132 "name": "Existed_Raid", 00:23:48.132 "uuid": "7fe913ae-d903-4682-96e8-343cee575c9a", 00:23:48.132 "strip_size_kb": 64, 00:23:48.132 "state": "offline", 00:23:48.132 "raid_level": "concat", 00:23:48.132 "superblock": false, 00:23:48.132 "num_base_bdevs": 4, 00:23:48.132 "num_base_bdevs_discovered": 3, 00:23:48.132 "num_base_bdevs_operational": 3, 00:23:48.132 "base_bdevs_list": [ 
00:23:48.132 { 00:23:48.132 "name": null, 00:23:48.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:48.132 "is_configured": false, 00:23:48.132 "data_offset": 0, 00:23:48.132 "data_size": 65536 00:23:48.132 }, 00:23:48.132 { 00:23:48.132 "name": "BaseBdev2", 00:23:48.132 "uuid": "ab0743cb-f528-4560-884a-4c5b0a010f2e", 00:23:48.132 "is_configured": true, 00:23:48.132 "data_offset": 0, 00:23:48.132 "data_size": 65536 00:23:48.132 }, 00:23:48.132 { 00:23:48.132 "name": "BaseBdev3", 00:23:48.132 "uuid": "be770841-9f13-4b68-b1d3-366197ce747f", 00:23:48.132 "is_configured": true, 00:23:48.132 "data_offset": 0, 00:23:48.132 "data_size": 65536 00:23:48.132 }, 00:23:48.132 { 00:23:48.132 "name": "BaseBdev4", 00:23:48.132 "uuid": "fb3a32c3-68e9-4dfc-93fe-b28e1048b2de", 00:23:48.132 "is_configured": true, 00:23:48.132 "data_offset": 0, 00:23:48.132 "data_size": 65536 00:23:48.132 } 00:23:48.132 ] 00:23:48.132 }' 00:23:48.132 14:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.132 14:16:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:49.065 14:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:49.065 14:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:49.065 14:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:49.065 14:16:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.065 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:49.065 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:49.065 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:49.346 [2024-07-15 14:16:35.220575] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:49.346 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:49.346 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:49.346 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:49.346 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.911 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:49.911 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:49.911 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:49.911 [2024-07-15 14:16:35.834297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:50.169 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:50.169 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:50.169 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:23:50.169 14:16:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:50.426 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:50.426 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:50.426 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:50.426 [2024-07-15 14:16:36.427711] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:50.426 [2024-07-15 14:16:36.427946] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:23:50.683 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:50.683 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:50.683 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.683 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:50.941 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:50.941 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:50.941 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:50.941 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:50.941 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:50.941 14:16:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:51.200 BaseBdev2 00:23:51.200 14:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:51.200 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:51.200 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:51.200 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:51.200 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:51.200 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:51.200 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:51.459 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:51.717 [ 00:23:51.717 { 00:23:51.717 "name": "BaseBdev2", 00:23:51.717 "aliases": [ 00:23:51.717 "ea53b3a0-5db1-420b-94bb-e0e821be8224" 00:23:51.717 ], 00:23:51.717 "product_name": "Malloc disk", 00:23:51.717 "block_size": 512, 00:23:51.717 "num_blocks": 65536, 00:23:51.717 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:23:51.717 "assigned_rate_limits": { 00:23:51.717 "rw_ios_per_sec": 0, 00:23:51.717 "rw_mbytes_per_sec": 
0, 00:23:51.717 "r_mbytes_per_sec": 0, 00:23:51.717 "w_mbytes_per_sec": 0 00:23:51.717 }, 00:23:51.717 "claimed": false, 00:23:51.717 "zoned": false, 00:23:51.717 "supported_io_types": { 00:23:51.717 "read": true, 00:23:51.717 "write": true, 00:23:51.717 "unmap": true, 00:23:51.717 "flush": true, 00:23:51.717 "reset": true, 00:23:51.717 "nvme_admin": false, 00:23:51.717 "nvme_io": false, 00:23:51.717 "nvme_io_md": false, 00:23:51.717 "write_zeroes": true, 00:23:51.717 "zcopy": true, 00:23:51.717 "get_zone_info": false, 00:23:51.717 "zone_management": false, 00:23:51.717 "zone_append": false, 00:23:51.717 "compare": false, 00:23:51.717 "compare_and_write": false, 00:23:51.717 "abort": true, 00:23:51.717 "seek_hole": false, 00:23:51.717 "seek_data": false, 00:23:51.717 "copy": true, 00:23:51.717 "nvme_iov_md": false 00:23:51.717 }, 00:23:51.717 "memory_domains": [ 00:23:51.717 { 00:23:51.717 "dma_device_id": "system", 00:23:51.717 "dma_device_type": 1 00:23:51.717 }, 00:23:51.717 { 00:23:51.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.717 "dma_device_type": 2 00:23:51.717 } 00:23:51.717 ], 00:23:51.717 "driver_specific": {} 00:23:51.717 } 00:23:51.717 ] 00:23:51.717 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:51.717 14:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:51.717 14:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:51.717 14:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:51.976 BaseBdev3 00:23:51.976 14:16:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:51.976 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:51.976 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:51.976 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:51.976 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:51.976 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:51.976 14:16:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:52.235 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:52.493 [ 00:23:52.493 { 00:23:52.493 "name": "BaseBdev3", 00:23:52.493 "aliases": [ 00:23:52.493 "5366cd64-3422-4566-8a78-26b31dd9bf09" 00:23:52.493 ], 00:23:52.493 "product_name": "Malloc disk", 00:23:52.493 "block_size": 512, 00:23:52.493 "num_blocks": 65536, 00:23:52.493 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:23:52.493 "assigned_rate_limits": { 00:23:52.493 "rw_ios_per_sec": 0, 00:23:52.493 "rw_mbytes_per_sec": 0, 00:23:52.493 "r_mbytes_per_sec": 0, 00:23:52.493 "w_mbytes_per_sec": 0 00:23:52.493 }, 00:23:52.493 "claimed": false, 00:23:52.493 "zoned": false, 00:23:52.493 "supported_io_types": { 00:23:52.493 "read": true, 00:23:52.493 "write": true, 00:23:52.493 "unmap": true, 00:23:52.493 "flush": true, 00:23:52.493 "reset": true, 00:23:52.493 
"nvme_admin": false, 00:23:52.493 "nvme_io": false, 00:23:52.493 "nvme_io_md": false, 00:23:52.493 "write_zeroes": true, 00:23:52.493 "zcopy": true, 00:23:52.493 "get_zone_info": false, 00:23:52.493 "zone_management": false, 00:23:52.493 "zone_append": false, 00:23:52.493 "compare": false, 00:23:52.493 "compare_and_write": false, 00:23:52.493 "abort": true, 00:23:52.493 "seek_hole": false, 00:23:52.493 "seek_data": false, 00:23:52.493 "copy": true, 00:23:52.493 "nvme_iov_md": false 00:23:52.493 }, 00:23:52.493 "memory_domains": [ 00:23:52.493 { 00:23:52.493 "dma_device_id": "system", 00:23:52.493 "dma_device_type": 1 00:23:52.493 }, 00:23:52.493 { 00:23:52.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:52.493 "dma_device_type": 2 00:23:52.493 } 00:23:52.493 ], 00:23:52.493 "driver_specific": {} 00:23:52.493 } 00:23:52.493 ] 00:23:52.493 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:52.493 14:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:52.493 14:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:52.493 14:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:53.059 BaseBdev4 00:23:53.059 14:16:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:53.059 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:53.059 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:53.059 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:53.059 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:53.059 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:53.059 14:16:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:53.317 14:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:53.575 [ 00:23:53.575 { 00:23:53.575 "name": "BaseBdev4", 00:23:53.575 "aliases": [ 00:23:53.575 "97cc474c-43b9-4242-9ef4-93f33493b0ad" 00:23:53.575 ], 00:23:53.575 "product_name": "Malloc disk", 00:23:53.575 "block_size": 512, 00:23:53.575 "num_blocks": 65536, 00:23:53.575 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:23:53.575 "assigned_rate_limits": { 00:23:53.575 "rw_ios_per_sec": 0, 00:23:53.575 "rw_mbytes_per_sec": 0, 00:23:53.575 "r_mbytes_per_sec": 0, 00:23:53.575 "w_mbytes_per_sec": 0 00:23:53.575 }, 00:23:53.575 "claimed": false, 00:23:53.575 "zoned": false, 00:23:53.575 "supported_io_types": { 00:23:53.575 "read": true, 00:23:53.575 "write": true, 00:23:53.575 "unmap": true, 00:23:53.575 "flush": true, 00:23:53.575 "reset": true, 00:23:53.575 "nvme_admin": false, 00:23:53.575 "nvme_io": false, 00:23:53.575 "nvme_io_md": false, 00:23:53.575 "write_zeroes": true, 00:23:53.575 "zcopy": true, 00:23:53.575 "get_zone_info": false, 00:23:53.575 "zone_management": false, 00:23:53.575 "zone_append": false, 00:23:53.575 "compare": false, 00:23:53.575 "compare_and_write": false, 00:23:53.575 
"abort": true, 00:23:53.575 "seek_hole": false, 00:23:53.575 "seek_data": false, 00:23:53.575 "copy": true, 00:23:53.575 "nvme_iov_md": false 00:23:53.575 }, 00:23:53.575 "memory_domains": [ 00:23:53.575 { 00:23:53.575 "dma_device_id": "system", 00:23:53.575 "dma_device_type": 1 00:23:53.575 }, 00:23:53.575 { 00:23:53.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:53.575 "dma_device_type": 2 00:23:53.575 } 00:23:53.575 ], 00:23:53.575 "driver_specific": {} 00:23:53.575 } 00:23:53.575 ] 00:23:53.575 14:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:53.575 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:53.575 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:53.575 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:53.575 [2024-07-15 14:16:39.575501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:53.575 [2024-07-15 14:16:39.575847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:53.575 [2024-07-15 14:16:39.575993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:53.575 [2024-07-15 14:16:39.577550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:53.575 [2024-07-15 14:16:39.577776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.831 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:54.087 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:54.087 "name": "Existed_Raid", 00:23:54.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.087 "strip_size_kb": 64, 00:23:54.087 "state": "configuring", 00:23:54.087 "raid_level": "concat", 00:23:54.087 "superblock": false, 00:23:54.087 "num_base_bdevs": 4, 
00:23:54.087 "num_base_bdevs_discovered": 3, 00:23:54.087 "num_base_bdevs_operational": 4, 00:23:54.087 "base_bdevs_list": [ 00:23:54.087 { 00:23:54.087 "name": "BaseBdev1", 00:23:54.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:54.087 "is_configured": false, 00:23:54.087 "data_offset": 0, 00:23:54.087 "data_size": 0 00:23:54.087 }, 00:23:54.087 { 00:23:54.087 "name": "BaseBdev2", 00:23:54.087 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:23:54.088 "is_configured": true, 00:23:54.088 "data_offset": 0, 00:23:54.088 "data_size": 65536 00:23:54.088 }, 00:23:54.088 { 00:23:54.088 "name": "BaseBdev3", 00:23:54.088 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:23:54.088 "is_configured": true, 00:23:54.088 "data_offset": 0, 00:23:54.088 "data_size": 65536 00:23:54.088 }, 00:23:54.088 { 00:23:54.088 "name": "BaseBdev4", 00:23:54.088 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:23:54.088 "is_configured": true, 00:23:54.088 "data_offset": 0, 00:23:54.088 "data_size": 65536 00:23:54.088 } 00:23:54.088 ] 00:23:54.088 }' 00:23:54.088 14:16:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:54.088 14:16:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:54.651 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:54.910 [2024-07-15 14:16:40.745129] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.910 14:16:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:55.169 14:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.169 "name": "Existed_Raid", 00:23:55.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.169 "strip_size_kb": 64, 00:23:55.169 "state": "configuring", 00:23:55.169 "raid_level": "concat", 00:23:55.169 "superblock": false, 00:23:55.169 "num_base_bdevs": 4, 00:23:55.169 "num_base_bdevs_discovered": 2, 00:23:55.169 "num_base_bdevs_operational": 4, 00:23:55.169 "base_bdevs_list": [ 00:23:55.169 { 
00:23:55.169 "name": "BaseBdev1", 00:23:55.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.169 "is_configured": false, 00:23:55.169 "data_offset": 0, 00:23:55.169 "data_size": 0 00:23:55.169 }, 00:23:55.169 { 00:23:55.169 "name": null, 00:23:55.169 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:23:55.169 "is_configured": false, 00:23:55.169 "data_offset": 0, 00:23:55.169 "data_size": 65536 00:23:55.169 }, 00:23:55.169 { 00:23:55.169 "name": "BaseBdev3", 00:23:55.169 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:23:55.169 "is_configured": true, 00:23:55.169 "data_offset": 0, 00:23:55.169 "data_size": 65536 00:23:55.169 }, 00:23:55.169 { 00:23:55.169 "name": "BaseBdev4", 00:23:55.169 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:23:55.169 "is_configured": true, 00:23:55.169 "data_offset": 0, 00:23:55.169 "data_size": 65536 00:23:55.169 } 00:23:55.169 ] 00:23:55.169 }' 00:23:55.169 14:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.169 14:16:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:55.735 14:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.735 14:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:55.992 14:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:55.992 14:16:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:56.332 [2024-07-15 14:16:42.248553] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:56.332 BaseBdev1 00:23:56.332 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:56.332 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:56.620 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:56.620 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:23:56.620 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:56.620 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:56.620 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:56.620 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:56.878 [ 00:23:56.878 { 00:23:56.878 "name": "BaseBdev1", 00:23:56.878 "aliases": [ 00:23:56.878 "890f57f4-a638-4885-a395-6f7dbb32990d" 00:23:56.878 ], 00:23:56.878 "product_name": "Malloc disk", 00:23:56.878 "block_size": 512, 00:23:56.878 "num_blocks": 65536, 00:23:56.878 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:23:56.878 "assigned_rate_limits": { 00:23:56.878 "rw_ios_per_sec": 0, 00:23:56.878 "rw_mbytes_per_sec": 0, 00:23:56.878 "r_mbytes_per_sec": 0, 00:23:56.878 "w_mbytes_per_sec": 0 00:23:56.878 }, 00:23:56.878 "claimed": true, 00:23:56.878 "claim_type": "exclusive_write", 00:23:56.878 
"zoned": false, 00:23:56.878 "supported_io_types": { 00:23:56.878 "read": true, 00:23:56.878 "write": true, 00:23:56.878 "unmap": true, 00:23:56.878 "flush": true, 00:23:56.878 "reset": true, 00:23:56.878 "nvme_admin": false, 00:23:56.878 "nvme_io": false, 00:23:56.878 "nvme_io_md": false, 00:23:56.878 "write_zeroes": true, 00:23:56.878 "zcopy": true, 00:23:56.878 "get_zone_info": false, 00:23:56.878 "zone_management": false, 00:23:56.878 "zone_append": false, 00:23:56.878 "compare": false, 00:23:56.878 "compare_and_write": false, 00:23:56.878 "abort": true, 00:23:56.878 "seek_hole": false, 00:23:56.878 "seek_data": false, 00:23:56.878 "copy": true, 00:23:56.878 "nvme_iov_md": false 00:23:56.878 }, 00:23:56.878 "memory_domains": [ 00:23:56.878 { 00:23:56.878 "dma_device_id": "system", 00:23:56.878 "dma_device_type": 1 00:23:56.878 }, 00:23:56.878 { 00:23:56.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.878 "dma_device_type": 2 00:23:56.878 } 00:23:56.878 ], 00:23:56.878 "driver_specific": {} 00:23:56.878 } 00:23:56.878 ] 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:56.878 14:16:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:57.136 14:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:57.136 "name": "Existed_Raid", 00:23:57.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:57.136 "strip_size_kb": 64, 00:23:57.136 "state": "configuring", 00:23:57.136 "raid_level": "concat", 00:23:57.136 "superblock": false, 00:23:57.136 "num_base_bdevs": 4, 00:23:57.136 "num_base_bdevs_discovered": 3, 00:23:57.136 "num_base_bdevs_operational": 4, 00:23:57.136 "base_bdevs_list": [ 00:23:57.136 { 00:23:57.136 "name": "BaseBdev1", 00:23:57.136 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:23:57.136 "is_configured": true, 00:23:57.136 "data_offset": 0, 00:23:57.136 "data_size": 65536 00:23:57.136 }, 00:23:57.136 { 00:23:57.136 "name": null, 00:23:57.136 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:23:57.136 "is_configured": false, 00:23:57.136 "data_offset": 0, 00:23:57.136 "data_size": 
65536 00:23:57.136 }, 00:23:57.136 { 00:23:57.136 "name": "BaseBdev3", 00:23:57.136 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:23:57.136 "is_configured": true, 00:23:57.136 "data_offset": 0, 00:23:57.136 "data_size": 65536 00:23:57.136 }, 00:23:57.136 { 00:23:57.136 "name": "BaseBdev4", 00:23:57.136 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:23:57.136 "is_configured": true, 00:23:57.136 "data_offset": 0, 00:23:57.136 "data_size": 65536 00:23:57.136 } 00:23:57.136 ] 00:23:57.136 }' 00:23:57.136 14:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:57.136 14:16:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:58.069 14:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:58.069 14:16:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.069 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:58.069 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:58.636 [2024-07-15 14:16:44.349093] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:58.636 "name": "Existed_Raid", 00:23:58.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:58.636 "strip_size_kb": 64, 00:23:58.636 "state": "configuring", 00:23:58.636 "raid_level": "concat", 00:23:58.636 "superblock": false, 00:23:58.636 "num_base_bdevs": 4, 00:23:58.636 "num_base_bdevs_discovered": 2, 00:23:58.636 "num_base_bdevs_operational": 4, 00:23:58.636 "base_bdevs_list": [ 00:23:58.636 { 00:23:58.636 "name": "BaseBdev1", 00:23:58.636 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:23:58.636 "is_configured": true, 
00:23:58.636 "data_offset": 0, 00:23:58.636 "data_size": 65536 00:23:58.636 }, 00:23:58.636 { 00:23:58.636 "name": null, 00:23:58.636 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:23:58.636 "is_configured": false, 00:23:58.636 "data_offset": 0, 00:23:58.636 "data_size": 65536 00:23:58.636 }, 00:23:58.636 { 00:23:58.636 "name": null, 00:23:58.636 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:23:58.636 "is_configured": false, 00:23:58.636 "data_offset": 0, 00:23:58.636 "data_size": 65536 00:23:58.636 }, 00:23:58.636 { 00:23:58.636 "name": "BaseBdev4", 00:23:58.636 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:23:58.636 "is_configured": true, 00:23:58.636 "data_offset": 0, 00:23:58.636 "data_size": 65536 00:23:58.636 } 00:23:58.636 ] 00:23:58.636 }' 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:58.636 14:16:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:59.567 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.567 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:59.567 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:59.567 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:59.885 [2024-07-15 14:16:45.729352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.885 14:16:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:00.141 14:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:00.141 "name": "Existed_Raid", 00:24:00.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:00.141 "strip_size_kb": 64, 00:24:00.141 "state": "configuring", 00:24:00.141 "raid_level": "concat", 00:24:00.141 "superblock": false, 
00:24:00.141 "num_base_bdevs": 4, 00:24:00.141 "num_base_bdevs_discovered": 3, 00:24:00.141 "num_base_bdevs_operational": 4, 00:24:00.141 "base_bdevs_list": [ 00:24:00.141 { 00:24:00.141 "name": "BaseBdev1", 00:24:00.141 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:24:00.141 "is_configured": true, 00:24:00.141 "data_offset": 0, 00:24:00.141 "data_size": 65536 00:24:00.141 }, 00:24:00.141 { 00:24:00.141 "name": null, 00:24:00.141 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:24:00.141 "is_configured": false, 00:24:00.141 "data_offset": 0, 00:24:00.141 "data_size": 65536 00:24:00.141 }, 00:24:00.141 { 00:24:00.141 "name": "BaseBdev3", 00:24:00.141 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:24:00.141 "is_configured": true, 00:24:00.141 "data_offset": 0, 00:24:00.141 "data_size": 65536 00:24:00.141 }, 00:24:00.141 { 00:24:00.141 "name": "BaseBdev4", 00:24:00.141 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:24:00.141 "is_configured": true, 00:24:00.141 "data_offset": 0, 00:24:00.141 "data_size": 65536 00:24:00.141 } 00:24:00.141 ] 00:24:00.141 }' 00:24:00.141 14:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:00.141 14:16:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:01.072 14:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.072 14:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:01.072 14:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:01.072 14:16:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:01.330 [2024-07-15 14:16:47.213574] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.330 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:01.648 14:16:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:01.648 "name": "Existed_Raid", 00:24:01.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:01.648 "strip_size_kb": 64, 00:24:01.648 "state": "configuring", 00:24:01.648 "raid_level": "concat", 00:24:01.648 "superblock": false, 00:24:01.648 "num_base_bdevs": 4, 00:24:01.648 "num_base_bdevs_discovered": 2, 00:24:01.648 "num_base_bdevs_operational": 4, 00:24:01.648 "base_bdevs_list": [ 00:24:01.648 { 00:24:01.648 "name": null, 00:24:01.648 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:24:01.648 "is_configured": false, 00:24:01.648 "data_offset": 0, 00:24:01.648 "data_size": 65536 00:24:01.648 }, 00:24:01.648 { 00:24:01.648 "name": null, 00:24:01.648 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:24:01.648 "is_configured": false, 00:24:01.648 "data_offset": 0, 00:24:01.648 "data_size": 65536 00:24:01.648 }, 00:24:01.648 { 00:24:01.648 "name": "BaseBdev3", 00:24:01.648 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:24:01.648 "is_configured": true, 00:24:01.648 "data_offset": 0, 00:24:01.648 "data_size": 65536 00:24:01.648 }, 00:24:01.648 { 00:24:01.648 "name": "BaseBdev4", 00:24:01.648 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:24:01.648 "is_configured": true, 00:24:01.648 "data_offset": 0, 00:24:01.648 "data_size": 65536 00:24:01.648 } 00:24:01.648 ] 00:24:01.648 }' 00:24:01.648 14:16:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:01.648 14:16:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.600 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.600 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:02.600 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:02.600 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:02.858 [2024-07-15 14:16:48.684155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.858 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:03.126 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.126 "name": "Existed_Raid", 00:24:03.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:03.126 "strip_size_kb": 64, 00:24:03.126 "state": "configuring", 00:24:03.126 "raid_level": "concat", 00:24:03.126 "superblock": false, 00:24:03.126 "num_base_bdevs": 4, 00:24:03.126 "num_base_bdevs_discovered": 3, 00:24:03.126 "num_base_bdevs_operational": 4, 00:24:03.126 "base_bdevs_list": [ 00:24:03.126 { 00:24:03.126 "name": null, 00:24:03.126 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:24:03.126 "is_configured": false, 00:24:03.126 "data_offset": 0, 00:24:03.126 "data_size": 65536 00:24:03.126 }, 00:24:03.126 { 00:24:03.126 "name": "BaseBdev2", 00:24:03.126 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:24:03.126 "is_configured": true, 00:24:03.126 "data_offset": 0, 00:24:03.126 "data_size": 65536 00:24:03.126 }, 00:24:03.126 { 00:24:03.126 "name": "BaseBdev3", 00:24:03.126 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:24:03.126 "is_configured": true, 00:24:03.126 "data_offset": 0, 00:24:03.126 "data_size": 65536 00:24:03.126 }, 00:24:03.126 { 00:24:03.126 "name": "BaseBdev4", 00:24:03.126 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:24:03.126 "is_configured": true, 00:24:03.126 "data_offset": 0, 00:24:03.126 "data_size": 65536 00:24:03.126 } 00:24:03.126 ] 00:24:03.126 }' 00:24:03.126 14:16:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.126 14:16:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:03.696 14:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.696 14:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:03.954 14:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:03.954 14:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:03.954 14:16:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.211 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 890f57f4-a638-4885-a395-6f7dbb32990d 00:24:04.469 [2024-07-15 14:16:50.356610] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:04.469 [2024-07-15 14:16:50.357154] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:24:04.469 [2024-07-15 14:16:50.357295] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:24:04.469 [2024-07-15 14:16:50.357578] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:04.469 [2024-07-15 14:16:50.358014] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:24:04.469 [2024-07-15 14:16:50.358184] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:24:04.469 [2024-07-15 14:16:50.358558] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.469 NewBaseBdev 00:24:04.469 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:04.469 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:04.469 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:04.469 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:24:04.469 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:04.469 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:04.469 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:04.728 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:04.987 [ 00:24:04.987 { 00:24:04.987 "name": "NewBaseBdev", 00:24:04.987 "aliases": [ 00:24:04.987 "890f57f4-a638-4885-a395-6f7dbb32990d" 00:24:04.987 ], 00:24:04.987 "product_name": "Malloc disk", 00:24:04.987 "block_size": 512, 00:24:04.987 "num_blocks": 65536, 00:24:04.987 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:24:04.987 "assigned_rate_limits": { 00:24:04.987 "rw_ios_per_sec": 0, 00:24:04.987 "rw_mbytes_per_sec": 0, 00:24:04.987 "r_mbytes_per_sec": 0, 00:24:04.987 "w_mbytes_per_sec": 0 00:24:04.987 }, 00:24:04.987 "claimed": true, 00:24:04.987 "claim_type": "exclusive_write", 00:24:04.987 "zoned": false, 00:24:04.987 "supported_io_types": { 00:24:04.987 "read": true, 00:24:04.987 "write": true, 00:24:04.987 "unmap": true, 00:24:04.987 "flush": true, 00:24:04.987 "reset": true, 00:24:04.987 "nvme_admin": false, 00:24:04.987 "nvme_io": false, 00:24:04.987 "nvme_io_md": false, 00:24:04.987 "write_zeroes": true, 00:24:04.987 "zcopy": true, 00:24:04.987 "get_zone_info": false, 00:24:04.987 "zone_management": false, 00:24:04.987 "zone_append": false, 00:24:04.987 "compare": false, 00:24:04.987 "compare_and_write": false, 00:24:04.987 "abort": true, 00:24:04.987 "seek_hole": false, 00:24:04.987 "seek_data": false, 00:24:04.987 "copy": true, 00:24:04.987 "nvme_iov_md": false 00:24:04.987 }, 00:24:04.987 "memory_domains": [ 00:24:04.987 { 00:24:04.987 "dma_device_id": "system", 00:24:04.987 "dma_device_type": 1 00:24:04.987 }, 00:24:04.987 { 00:24:04.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:04.987 "dma_device_type": 2 00:24:04.987 } 00:24:04.987 ], 00:24:04.987 "driver_specific": {} 00:24:04.987 } 00:24:04.987 ] 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:04.987 14:16:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.246 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.246 "name": "Existed_Raid", 00:24:05.246 "uuid": "367ec41f-2356-4918-be49-f6e8431c55fd", 00:24:05.246 "strip_size_kb": 64, 00:24:05.246 "state": "online", 00:24:05.246 "raid_level": "concat", 00:24:05.246 "superblock": false, 00:24:05.246 "num_base_bdevs": 4, 00:24:05.246 "num_base_bdevs_discovered": 4, 00:24:05.246 "num_base_bdevs_operational": 4, 00:24:05.246 "base_bdevs_list": [ 00:24:05.246 { 00:24:05.246 "name": "NewBaseBdev", 00:24:05.246 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:24:05.246 "is_configured": true, 00:24:05.246 "data_offset": 0, 00:24:05.246 "data_size": 65536 00:24:05.246 }, 00:24:05.246 { 00:24:05.246 "name": "BaseBdev2", 00:24:05.246 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:24:05.246 "is_configured": true, 00:24:05.246 "data_offset": 0, 00:24:05.246 "data_size": 65536 00:24:05.246 }, 00:24:05.246 { 00:24:05.246 "name": "BaseBdev3", 00:24:05.246 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:24:05.246 "is_configured": true, 00:24:05.246 "data_offset": 0, 00:24:05.246 "data_size": 65536 00:24:05.246 }, 00:24:05.246 { 00:24:05.246 "name": "BaseBdev4", 00:24:05.246 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:24:05.246 "is_configured": true, 00:24:05.246 "data_offset": 0, 00:24:05.246 "data_size": 65536 00:24:05.246 } 00:24:05.246 ] 00:24:05.246 }' 00:24:05.246 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.246 14:16:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:06.193 14:16:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:06.193 [2024-07-15 14:16:52.158353] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.193 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:06.193 "name": "Existed_Raid", 00:24:06.193 "aliases": [ 00:24:06.193 "367ec41f-2356-4918-be49-f6e8431c55fd" 00:24:06.193 ], 00:24:06.193 "product_name": "Raid Volume", 00:24:06.193 "block_size": 512, 00:24:06.193 "num_blocks": 262144, 00:24:06.193 "uuid": "367ec41f-2356-4918-be49-f6e8431c55fd", 00:24:06.193 "assigned_rate_limits": { 00:24:06.193 "rw_ios_per_sec": 0, 00:24:06.193 "rw_mbytes_per_sec": 0, 00:24:06.193 "r_mbytes_per_sec": 0, 00:24:06.193 "w_mbytes_per_sec": 0 00:24:06.193 }, 00:24:06.193 "claimed": false, 00:24:06.193 "zoned": false, 00:24:06.193 "supported_io_types": { 00:24:06.193 "read": true, 00:24:06.193 "write": true, 00:24:06.193 "unmap": true, 00:24:06.193 "flush": true, 00:24:06.193 "reset": true, 00:24:06.193 "nvme_admin": false, 00:24:06.193 "nvme_io": false, 00:24:06.193 "nvme_io_md": false, 00:24:06.193 "write_zeroes": true, 00:24:06.193 "zcopy": false, 00:24:06.193 "get_zone_info": false, 00:24:06.193 "zone_management": false, 00:24:06.193 "zone_append": false, 00:24:06.193 "compare": false, 00:24:06.193 "compare_and_write": false, 00:24:06.193 "abort": false, 00:24:06.193 "seek_hole": false, 00:24:06.193 "seek_data": false, 00:24:06.193 "copy": false, 00:24:06.193 "nvme_iov_md": false 00:24:06.193 }, 00:24:06.193 "memory_domains": [ 00:24:06.193 { 00:24:06.193 "dma_device_id": "system", 00:24:06.193 "dma_device_type": 1 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.193 "dma_device_type": 2 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "dma_device_id": "system", 00:24:06.193 "dma_device_type": 1 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.193 "dma_device_type": 2 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "dma_device_id": "system", 00:24:06.193 "dma_device_type": 1 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.193 "dma_device_type": 2 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "dma_device_id": "system", 00:24:06.193 "dma_device_type": 1 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.193 "dma_device_type": 2 00:24:06.193 } 00:24:06.193 ], 00:24:06.193 "driver_specific": { 00:24:06.193 "raid": { 00:24:06.193 "uuid": "367ec41f-2356-4918-be49-f6e8431c55fd", 00:24:06.193 "strip_size_kb": 64, 00:24:06.193 "state": "online", 00:24:06.193 "raid_level": "concat", 00:24:06.193 "superblock": false, 00:24:06.193 "num_base_bdevs": 4, 00:24:06.193 "num_base_bdevs_discovered": 4, 00:24:06.193 "num_base_bdevs_operational": 4, 00:24:06.193 "base_bdevs_list": [ 00:24:06.193 { 00:24:06.193 "name": "NewBaseBdev", 00:24:06.193 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:24:06.193 "is_configured": true, 00:24:06.193 "data_offset": 0, 00:24:06.193 "data_size": 65536 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "name": "BaseBdev2", 00:24:06.193 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:24:06.193 "is_configured": true, 00:24:06.193 "data_offset": 0, 00:24:06.193 "data_size": 65536 00:24:06.193 }, 00:24:06.193 { 00:24:06.193 "name": "BaseBdev3", 00:24:06.193 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:24:06.193 "is_configured": true, 00:24:06.193 "data_offset": 0, 00:24:06.193 "data_size": 65536 00:24:06.193 
}, 00:24:06.193 { 00:24:06.193 "name": "BaseBdev4", 00:24:06.193 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:24:06.193 "is_configured": true, 00:24:06.194 "data_offset": 0, 00:24:06.194 "data_size": 65536 00:24:06.194 } 00:24:06.194 ] 00:24:06.194 } 00:24:06.194 } 00:24:06.194 }' 00:24:06.194 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:06.451 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:06.451 BaseBdev2 00:24:06.451 BaseBdev3 00:24:06.451 BaseBdev4' 00:24:06.451 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:06.451 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:06.451 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:06.710 "name": "NewBaseBdev", 00:24:06.710 "aliases": [ 00:24:06.710 "890f57f4-a638-4885-a395-6f7dbb32990d" 00:24:06.710 ], 00:24:06.710 "product_name": "Malloc disk", 00:24:06.710 "block_size": 512, 00:24:06.710 "num_blocks": 65536, 00:24:06.710 "uuid": "890f57f4-a638-4885-a395-6f7dbb32990d", 00:24:06.710 "assigned_rate_limits": { 00:24:06.710 "rw_ios_per_sec": 0, 00:24:06.710 "rw_mbytes_per_sec": 0, 00:24:06.710 "r_mbytes_per_sec": 0, 00:24:06.710 "w_mbytes_per_sec": 0 00:24:06.710 }, 00:24:06.710 "claimed": true, 00:24:06.710 "claim_type": "exclusive_write", 00:24:06.710 "zoned": false, 00:24:06.710 "supported_io_types": { 00:24:06.710 "read": true, 00:24:06.710 "write": true, 00:24:06.710 "unmap": true, 00:24:06.710 "flush": true, 00:24:06.710 "reset": true, 00:24:06.710 "nvme_admin": false, 00:24:06.710 "nvme_io": false, 00:24:06.710 "nvme_io_md": false, 00:24:06.710 "write_zeroes": true, 00:24:06.710 "zcopy": true, 00:24:06.710 "get_zone_info": false, 00:24:06.710 "zone_management": false, 00:24:06.710 "zone_append": false, 00:24:06.710 "compare": false, 00:24:06.710 "compare_and_write": false, 00:24:06.710 "abort": true, 00:24:06.710 "seek_hole": false, 00:24:06.710 "seek_data": false, 00:24:06.710 "copy": true, 00:24:06.710 "nvme_iov_md": false 00:24:06.710 }, 00:24:06.710 "memory_domains": [ 00:24:06.710 { 00:24:06.710 "dma_device_id": "system", 00:24:06.710 "dma_device_type": 1 00:24:06.710 }, 00:24:06.710 { 00:24:06.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.710 "dma_device_type": 2 00:24:06.710 } 00:24:06.710 ], 00:24:06.710 "driver_specific": {} 00:24:06.710 }' 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.710 14:16:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.969 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:06.969 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.969 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.969 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.969 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:06.969 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:06.969 14:16:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:07.227 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:07.227 "name": "BaseBdev2", 00:24:07.227 "aliases": [ 00:24:07.227 "ea53b3a0-5db1-420b-94bb-e0e821be8224" 00:24:07.227 ], 00:24:07.227 "product_name": "Malloc disk", 00:24:07.227 "block_size": 512, 00:24:07.227 "num_blocks": 65536, 00:24:07.227 "uuid": "ea53b3a0-5db1-420b-94bb-e0e821be8224", 00:24:07.227 "assigned_rate_limits": { 00:24:07.227 "rw_ios_per_sec": 0, 00:24:07.227 "rw_mbytes_per_sec": 0, 00:24:07.227 "r_mbytes_per_sec": 0, 00:24:07.227 "w_mbytes_per_sec": 0 00:24:07.227 }, 00:24:07.227 "claimed": true, 00:24:07.227 "claim_type": "exclusive_write", 00:24:07.227 "zoned": false, 00:24:07.227 "supported_io_types": { 00:24:07.227 "read": true, 00:24:07.227 "write": true, 00:24:07.227 "unmap": true, 00:24:07.227 "flush": true, 00:24:07.227 "reset": true, 00:24:07.227 "nvme_admin": false, 00:24:07.227 "nvme_io": false, 00:24:07.227 "nvme_io_md": false, 00:24:07.227 "write_zeroes": true, 00:24:07.227 "zcopy": true, 00:24:07.227 "get_zone_info": false, 00:24:07.227 "zone_management": false, 00:24:07.227 "zone_append": false, 00:24:07.227 "compare": false, 00:24:07.227 "compare_and_write": false, 00:24:07.227 "abort": true, 00:24:07.227 "seek_hole": false, 00:24:07.227 "seek_data": false, 00:24:07.227 "copy": true, 00:24:07.228 "nvme_iov_md": false 00:24:07.228 }, 00:24:07.228 "memory_domains": [ 00:24:07.228 { 00:24:07.228 "dma_device_id": "system", 00:24:07.228 "dma_device_type": 1 00:24:07.228 }, 00:24:07.228 { 00:24:07.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.228 "dma_device_type": 2 00:24:07.228 } 00:24:07.228 ], 00:24:07.228 "driver_specific": {} 00:24:07.228 }' 00:24:07.228 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.228 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.484 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:07.484 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.484 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.484 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:07.484 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.484 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.484 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:07.484 14:16:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.741 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.741 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:07.741 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:07.741 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:07.741 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:07.999 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:07.999 "name": "BaseBdev3", 00:24:07.999 "aliases": [ 00:24:07.999 "5366cd64-3422-4566-8a78-26b31dd9bf09" 00:24:07.999 ], 00:24:07.999 "product_name": "Malloc disk", 00:24:07.999 "block_size": 512, 00:24:07.999 "num_blocks": 65536, 00:24:07.999 "uuid": "5366cd64-3422-4566-8a78-26b31dd9bf09", 00:24:07.999 "assigned_rate_limits": { 00:24:07.999 "rw_ios_per_sec": 0, 00:24:07.999 "rw_mbytes_per_sec": 0, 00:24:07.999 "r_mbytes_per_sec": 0, 00:24:07.999 "w_mbytes_per_sec": 0 00:24:07.999 }, 00:24:07.999 "claimed": true, 00:24:07.999 "claim_type": "exclusive_write", 00:24:07.999 "zoned": false, 00:24:07.999 "supported_io_types": { 00:24:07.999 "read": true, 00:24:07.999 "write": true, 00:24:07.999 "unmap": true, 00:24:07.999 "flush": true, 00:24:07.999 "reset": true, 00:24:07.999 "nvme_admin": false, 00:24:07.999 "nvme_io": false, 00:24:07.999 "nvme_io_md": false, 00:24:07.999 "write_zeroes": true, 00:24:07.999 "zcopy": true, 00:24:07.999 "get_zone_info": false, 00:24:07.999 "zone_management": false, 00:24:07.999 "zone_append": false, 00:24:07.999 "compare": false, 00:24:07.999 "compare_and_write": false, 00:24:07.999 "abort": true, 00:24:07.999 "seek_hole": false, 00:24:07.999 "seek_data": false, 00:24:07.999 "copy": true, 00:24:07.999 "nvme_iov_md": false 00:24:07.999 }, 00:24:07.999 "memory_domains": [ 00:24:07.999 { 00:24:07.999 "dma_device_id": "system", 00:24:07.999 "dma_device_type": 1 00:24:07.999 }, 00:24:07.999 { 00:24:07.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.999 "dma_device_type": 2 00:24:07.999 } 00:24:07.999 ], 00:24:07.999 "driver_specific": {} 00:24:07.999 }' 00:24:07.999 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.999 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.999 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:07.999 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.999 14:16:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.258 14:16:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:08.258 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:08.517 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:08.517 "name": "BaseBdev4", 00:24:08.517 "aliases": [ 00:24:08.517 "97cc474c-43b9-4242-9ef4-93f33493b0ad" 00:24:08.517 ], 00:24:08.517 "product_name": "Malloc disk", 00:24:08.517 "block_size": 512, 00:24:08.517 "num_blocks": 65536, 00:24:08.517 "uuid": "97cc474c-43b9-4242-9ef4-93f33493b0ad", 00:24:08.517 "assigned_rate_limits": { 00:24:08.517 "rw_ios_per_sec": 0, 00:24:08.517 "rw_mbytes_per_sec": 0, 00:24:08.517 "r_mbytes_per_sec": 0, 00:24:08.517 "w_mbytes_per_sec": 0 00:24:08.517 }, 00:24:08.517 "claimed": true, 00:24:08.517 "claim_type": "exclusive_write", 00:24:08.517 "zoned": false, 00:24:08.517 "supported_io_types": { 00:24:08.517 "read": true, 00:24:08.517 "write": true, 00:24:08.517 "unmap": true, 00:24:08.517 "flush": true, 00:24:08.517 "reset": true, 00:24:08.517 "nvme_admin": false, 00:24:08.517 "nvme_io": false, 00:24:08.517 "nvme_io_md": false, 00:24:08.517 "write_zeroes": true, 00:24:08.517 "zcopy": true, 00:24:08.517 "get_zone_info": false, 00:24:08.517 "zone_management": false, 00:24:08.517 "zone_append": false, 00:24:08.517 "compare": false, 00:24:08.517 "compare_and_write": false, 00:24:08.517 "abort": true, 00:24:08.517 "seek_hole": false, 00:24:08.517 "seek_data": false, 00:24:08.517 "copy": true, 00:24:08.517 "nvme_iov_md": false 00:24:08.517 }, 00:24:08.517 "memory_domains": [ 00:24:08.517 { 00:24:08.517 "dma_device_id": "system", 00:24:08.517 "dma_device_type": 1 00:24:08.517 }, 00:24:08.517 { 00:24:08.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.517 "dma_device_type": 2 00:24:08.517 } 00:24:08.517 ], 00:24:08.517 "driver_specific": {} 00:24:08.517 }' 00:24:08.517 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.776 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.776 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:08.776 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.776 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.776 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:08.776 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.776 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:09.035 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:09.035 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.035 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:09.035 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:09.035 14:16:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:09.294 [2024-07-15 14:16:55.169461] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:09.294 [2024-07-15 14:16:55.169693] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:09.294 [2024-07-15 14:16:55.169894] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:09.294 [2024-07-15 14:16:55.170050] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:09.294 [2024-07-15 14:16:55.170162] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 203596 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 203596 ']' 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 203596 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 203596 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 203596' 00:24:09.295 killing process with pid 203596 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 203596 00:24:09.295 [2024-07-15 14:16:55.216195] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:09.295 14:16:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 203596 00:24:09.862 [2024-07-15 14:16:55.557989] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:10.798 00:24:10.798 real 0m36.913s 00:24:10.798 user 1m8.107s 00:24:10.798 sys 0m4.201s 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 ************************************ 00:24:10.798 END TEST raid_state_function_test 00:24:10.798 ************************************ 00:24:10.798 14:16:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:10.798 14:16:56 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:24:10.798 14:16:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:24:10.798 14:16:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:10.798 14:16:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 ************************************ 00:24:10.798 START TEST raid_state_function_test_sb 00:24:10.798 ************************************ 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=204717 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:10.798 Process raid pid: 204717 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 204717' 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 204717 /var/tmp/spdk-raid.sock 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 204717 ']' 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:10.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:10.798 14:16:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:10.798 [2024-07-15 14:16:56.792460] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:24:10.798 [2024-07-15 14:16:56.792767] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.056 [2024-07-15 14:16:56.950156] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.313 [2024-07-15 14:16:57.200464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.571 [2024-07-15 14:16:57.406002] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:12.137 14:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.137 14:16:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:24:12.137 14:16:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:12.395 [2024-07-15 14:16:58.163629] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:12.395 [2024-07-15 14:16:58.163956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:12.395 [2024-07-15 14:16:58.164113] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:12.395 [2024-07-15 14:16:58.164283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:12.395 [2024-07-15 14:16:58.164403] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:12.395 [2024-07-15 14:16:58.164561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:12.395 [2024-07-15 14:16:58.164681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:12.395 [2024-07-15 14:16:58.164836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 
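Note: the trace above drives the raid module entirely through rpc.py calls that can also be issued by hand. A minimal sketch of the same flow is shown below, assuming an SPDK application is already listening on /var/tmp/spdk-raid.sock and reusing the bdev names and sizes visible in the trace (65536 blocks of 512 bytes per malloc bdev); it is an illustration of the RPC sequence, not the test script itself.

RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'

# Register the raid set first (-s requests a superblock, -z 64 the strip size);
# with no base bdevs present it is created in the "configuring" state.
$RPC bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Create the backing malloc bdevs; the trace shows each one being claimed by
# the raid module as soon as a bdev with a matching name appears.
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "$b"
done

# Query the aggregate state the same way the test does with jq.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'

As in the trace, the state stays "configuring" while num_base_bdevs_discovered is below num_base_bdevs and should read "online" once the last base bdev has been claimed.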
00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.395 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.697 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:12.697 "name": "Existed_Raid", 00:24:12.697 "uuid": "8097d11d-658d-4f71-afad-d14effc8e42a", 00:24:12.697 "strip_size_kb": 64, 00:24:12.697 "state": "configuring", 00:24:12.697 "raid_level": "concat", 00:24:12.697 "superblock": true, 00:24:12.697 "num_base_bdevs": 4, 00:24:12.697 "num_base_bdevs_discovered": 0, 00:24:12.697 "num_base_bdevs_operational": 4, 00:24:12.697 "base_bdevs_list": [ 00:24:12.697 { 00:24:12.697 "name": "BaseBdev1", 00:24:12.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.697 "is_configured": false, 00:24:12.697 "data_offset": 0, 00:24:12.697 "data_size": 0 00:24:12.697 }, 00:24:12.697 { 00:24:12.697 "name": "BaseBdev2", 00:24:12.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.697 "is_configured": false, 00:24:12.697 "data_offset": 0, 00:24:12.697 "data_size": 0 00:24:12.697 }, 00:24:12.697 { 00:24:12.697 "name": "BaseBdev3", 00:24:12.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.697 "is_configured": false, 00:24:12.697 "data_offset": 0, 00:24:12.697 "data_size": 0 00:24:12.697 }, 00:24:12.697 { 00:24:12.697 "name": "BaseBdev4", 00:24:12.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.697 "is_configured": false, 00:24:12.697 "data_offset": 0, 00:24:12.697 "data_size": 0 00:24:12.697 } 00:24:12.697 ] 00:24:12.697 }' 00:24:12.697 14:16:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:12.697 14:16:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:13.271 14:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:13.529 [2024-07-15 14:16:59.423733] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:13.529 [2024-07-15 14:16:59.423978] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:13.529 14:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:13.787 [2024-07-15 14:16:59.655825] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:13.787 [2024-07-15 14:16:59.656142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:13.787 [2024-07-15 14:16:59.656274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:13.787 [2024-07-15 14:16:59.656348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:13.787 [2024-07-15 14:16:59.656525] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:13.787 [2024-07-15 14:16:59.656682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:13.787 [2024-07-15 14:16:59.656807] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:13.787 [2024-07-15 14:16:59.656997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:13.787 14:16:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:14.045 [2024-07-15 14:17:00.011230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:14.045 BaseBdev1 00:24:14.045 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:14.045 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:14.045 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:14.045 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:14.045 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:14.045 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:14.045 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:14.611 [ 00:24:14.611 { 00:24:14.611 "name": "BaseBdev1", 00:24:14.611 "aliases": [ 00:24:14.611 "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7" 00:24:14.611 ], 00:24:14.611 "product_name": "Malloc disk", 00:24:14.611 "block_size": 512, 00:24:14.611 "num_blocks": 65536, 00:24:14.611 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:14.611 "assigned_rate_limits": { 00:24:14.611 "rw_ios_per_sec": 0, 00:24:14.611 "rw_mbytes_per_sec": 0, 00:24:14.611 "r_mbytes_per_sec": 0, 00:24:14.611 "w_mbytes_per_sec": 0 00:24:14.611 }, 00:24:14.611 "claimed": true, 00:24:14.611 "claim_type": "exclusive_write", 00:24:14.611 "zoned": false, 00:24:14.611 "supported_io_types": { 00:24:14.611 "read": true, 00:24:14.611 "write": true, 00:24:14.611 "unmap": 
true, 00:24:14.611 "flush": true, 00:24:14.611 "reset": true, 00:24:14.611 "nvme_admin": false, 00:24:14.611 "nvme_io": false, 00:24:14.611 "nvme_io_md": false, 00:24:14.611 "write_zeroes": true, 00:24:14.611 "zcopy": true, 00:24:14.611 "get_zone_info": false, 00:24:14.611 "zone_management": false, 00:24:14.611 "zone_append": false, 00:24:14.611 "compare": false, 00:24:14.611 "compare_and_write": false, 00:24:14.611 "abort": true, 00:24:14.611 "seek_hole": false, 00:24:14.611 "seek_data": false, 00:24:14.611 "copy": true, 00:24:14.611 "nvme_iov_md": false 00:24:14.611 }, 00:24:14.611 "memory_domains": [ 00:24:14.611 { 00:24:14.611 "dma_device_id": "system", 00:24:14.611 "dma_device_type": 1 00:24:14.611 }, 00:24:14.611 { 00:24:14.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:14.611 "dma_device_type": 2 00:24:14.611 } 00:24:14.611 ], 00:24:14.611 "driver_specific": {} 00:24:14.611 } 00:24:14.611 ] 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.611 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.869 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:14.869 "name": "Existed_Raid", 00:24:14.869 "uuid": "63bfc106-f49a-4c8b-88f5-99b3919d43b8", 00:24:14.869 "strip_size_kb": 64, 00:24:14.869 "state": "configuring", 00:24:14.869 "raid_level": "concat", 00:24:14.869 "superblock": true, 00:24:14.869 "num_base_bdevs": 4, 00:24:14.869 "num_base_bdevs_discovered": 1, 00:24:14.869 "num_base_bdevs_operational": 4, 00:24:14.869 "base_bdevs_list": [ 00:24:14.869 { 00:24:14.869 "name": "BaseBdev1", 00:24:14.869 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:14.869 "is_configured": true, 00:24:14.869 "data_offset": 2048, 00:24:14.869 "data_size": 63488 00:24:14.869 }, 00:24:14.869 { 00:24:14.869 "name": "BaseBdev2", 00:24:14.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.869 "is_configured": false, 00:24:14.869 "data_offset": 0, 00:24:14.869 "data_size": 0 00:24:14.869 }, 00:24:14.869 { 00:24:14.869 "name": "BaseBdev3", 00:24:14.869 
"uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.869 "is_configured": false, 00:24:14.869 "data_offset": 0, 00:24:14.869 "data_size": 0 00:24:14.869 }, 00:24:14.869 { 00:24:14.869 "name": "BaseBdev4", 00:24:14.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.869 "is_configured": false, 00:24:14.869 "data_offset": 0, 00:24:14.869 "data_size": 0 00:24:14.869 } 00:24:14.869 ] 00:24:14.869 }' 00:24:14.869 14:17:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:14.869 14:17:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:15.803 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:15.803 [2024-07-15 14:17:01.723514] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:15.803 [2024-07-15 14:17:01.723833] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:15.803 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:16.061 [2024-07-15 14:17:01.967613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:16.061 [2024-07-15 14:17:01.969459] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:16.061 [2024-07-15 14:17:01.970042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:16.061 [2024-07-15 14:17:01.970187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:16.061 [2024-07-15 14:17:01.970333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:16.061 [2024-07-15 14:17:01.970485] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:16.061 [2024-07-15 14:17:01.970619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.061 14:17:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:16.319 14:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:16.319 "name": "Existed_Raid", 00:24:16.319 "uuid": "5036b29f-48e5-4aaf-9d62-7950c695a44e", 00:24:16.319 "strip_size_kb": 64, 00:24:16.319 "state": "configuring", 00:24:16.319 "raid_level": "concat", 00:24:16.319 "superblock": true, 00:24:16.319 "num_base_bdevs": 4, 00:24:16.319 "num_base_bdevs_discovered": 1, 00:24:16.319 "num_base_bdevs_operational": 4, 00:24:16.319 "base_bdevs_list": [ 00:24:16.319 { 00:24:16.319 "name": "BaseBdev1", 00:24:16.319 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:16.319 "is_configured": true, 00:24:16.319 "data_offset": 2048, 00:24:16.319 "data_size": 63488 00:24:16.319 }, 00:24:16.319 { 00:24:16.319 "name": "BaseBdev2", 00:24:16.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.319 "is_configured": false, 00:24:16.319 "data_offset": 0, 00:24:16.319 "data_size": 0 00:24:16.319 }, 00:24:16.319 { 00:24:16.319 "name": "BaseBdev3", 00:24:16.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.319 "is_configured": false, 00:24:16.319 "data_offset": 0, 00:24:16.319 "data_size": 0 00:24:16.319 }, 00:24:16.319 { 00:24:16.319 "name": "BaseBdev4", 00:24:16.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:16.319 "is_configured": false, 00:24:16.319 "data_offset": 0, 00:24:16.319 "data_size": 0 00:24:16.319 } 00:24:16.320 ] 00:24:16.320 }' 00:24:16.320 14:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:16.320 14:17:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:16.886 14:17:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:17.145 [2024-07-15 14:17:03.136232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:17.145 BaseBdev2 00:24:17.408 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:17.408 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:17.408 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:17.408 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:17.409 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:17.409 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:17.409 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:17.665 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:17.923 [ 00:24:17.923 { 
00:24:17.923 "name": "BaseBdev2", 00:24:17.923 "aliases": [ 00:24:17.923 "b3faa60a-ba33-4982-a6ab-37813f5e2f25" 00:24:17.923 ], 00:24:17.923 "product_name": "Malloc disk", 00:24:17.923 "block_size": 512, 00:24:17.923 "num_blocks": 65536, 00:24:17.923 "uuid": "b3faa60a-ba33-4982-a6ab-37813f5e2f25", 00:24:17.923 "assigned_rate_limits": { 00:24:17.923 "rw_ios_per_sec": 0, 00:24:17.923 "rw_mbytes_per_sec": 0, 00:24:17.923 "r_mbytes_per_sec": 0, 00:24:17.923 "w_mbytes_per_sec": 0 00:24:17.923 }, 00:24:17.923 "claimed": true, 00:24:17.923 "claim_type": "exclusive_write", 00:24:17.923 "zoned": false, 00:24:17.923 "supported_io_types": { 00:24:17.923 "read": true, 00:24:17.923 "write": true, 00:24:17.923 "unmap": true, 00:24:17.923 "flush": true, 00:24:17.923 "reset": true, 00:24:17.923 "nvme_admin": false, 00:24:17.923 "nvme_io": false, 00:24:17.923 "nvme_io_md": false, 00:24:17.923 "write_zeroes": true, 00:24:17.923 "zcopy": true, 00:24:17.923 "get_zone_info": false, 00:24:17.923 "zone_management": false, 00:24:17.923 "zone_append": false, 00:24:17.923 "compare": false, 00:24:17.923 "compare_and_write": false, 00:24:17.923 "abort": true, 00:24:17.923 "seek_hole": false, 00:24:17.923 "seek_data": false, 00:24:17.923 "copy": true, 00:24:17.923 "nvme_iov_md": false 00:24:17.923 }, 00:24:17.923 "memory_domains": [ 00:24:17.923 { 00:24:17.923 "dma_device_id": "system", 00:24:17.923 "dma_device_type": 1 00:24:17.923 }, 00:24:17.923 { 00:24:17.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:17.923 "dma_device_type": 2 00:24:17.923 } 00:24:17.923 ], 00:24:17.923 "driver_specific": {} 00:24:17.923 } 00:24:17.923 ] 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.923 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.182 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:24:18.182 "name": "Existed_Raid", 00:24:18.182 "uuid": "5036b29f-48e5-4aaf-9d62-7950c695a44e", 00:24:18.182 "strip_size_kb": 64, 00:24:18.182 "state": "configuring", 00:24:18.182 "raid_level": "concat", 00:24:18.182 "superblock": true, 00:24:18.182 "num_base_bdevs": 4, 00:24:18.182 "num_base_bdevs_discovered": 2, 00:24:18.182 "num_base_bdevs_operational": 4, 00:24:18.182 "base_bdevs_list": [ 00:24:18.182 { 00:24:18.182 "name": "BaseBdev1", 00:24:18.182 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:18.182 "is_configured": true, 00:24:18.182 "data_offset": 2048, 00:24:18.182 "data_size": 63488 00:24:18.182 }, 00:24:18.182 { 00:24:18.182 "name": "BaseBdev2", 00:24:18.182 "uuid": "b3faa60a-ba33-4982-a6ab-37813f5e2f25", 00:24:18.182 "is_configured": true, 00:24:18.182 "data_offset": 2048, 00:24:18.182 "data_size": 63488 00:24:18.182 }, 00:24:18.182 { 00:24:18.182 "name": "BaseBdev3", 00:24:18.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.182 "is_configured": false, 00:24:18.182 "data_offset": 0, 00:24:18.182 "data_size": 0 00:24:18.182 }, 00:24:18.182 { 00:24:18.182 "name": "BaseBdev4", 00:24:18.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.182 "is_configured": false, 00:24:18.182 "data_offset": 0, 00:24:18.182 "data_size": 0 00:24:18.182 } 00:24:18.182 ] 00:24:18.182 }' 00:24:18.182 14:17:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:18.182 14:17:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:18.746 14:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:19.005 [2024-07-15 14:17:04.894806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:19.005 BaseBdev3 00:24:19.005 14:17:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:19.005 14:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:19.005 14:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:19.005 14:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:19.005 14:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:19.005 14:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:19.005 14:17:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:19.262 14:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:19.828 [ 00:24:19.828 { 00:24:19.828 "name": "BaseBdev3", 00:24:19.828 "aliases": [ 00:24:19.828 "a4eebb70-b226-4c1c-a69f-bf5eb48e2489" 00:24:19.828 ], 00:24:19.828 "product_name": "Malloc disk", 00:24:19.828 "block_size": 512, 00:24:19.828 "num_blocks": 65536, 00:24:19.828 "uuid": "a4eebb70-b226-4c1c-a69f-bf5eb48e2489", 00:24:19.829 "assigned_rate_limits": { 00:24:19.829 "rw_ios_per_sec": 0, 00:24:19.829 "rw_mbytes_per_sec": 0, 00:24:19.829 "r_mbytes_per_sec": 0, 00:24:19.829 "w_mbytes_per_sec": 0 00:24:19.829 }, 00:24:19.829 "claimed": true, 00:24:19.829 
"claim_type": "exclusive_write", 00:24:19.829 "zoned": false, 00:24:19.829 "supported_io_types": { 00:24:19.829 "read": true, 00:24:19.829 "write": true, 00:24:19.829 "unmap": true, 00:24:19.829 "flush": true, 00:24:19.829 "reset": true, 00:24:19.829 "nvme_admin": false, 00:24:19.829 "nvme_io": false, 00:24:19.829 "nvme_io_md": false, 00:24:19.829 "write_zeroes": true, 00:24:19.829 "zcopy": true, 00:24:19.829 "get_zone_info": false, 00:24:19.829 "zone_management": false, 00:24:19.829 "zone_append": false, 00:24:19.829 "compare": false, 00:24:19.829 "compare_and_write": false, 00:24:19.829 "abort": true, 00:24:19.829 "seek_hole": false, 00:24:19.829 "seek_data": false, 00:24:19.829 "copy": true, 00:24:19.829 "nvme_iov_md": false 00:24:19.829 }, 00:24:19.829 "memory_domains": [ 00:24:19.829 { 00:24:19.829 "dma_device_id": "system", 00:24:19.829 "dma_device_type": 1 00:24:19.829 }, 00:24:19.829 { 00:24:19.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:19.829 "dma_device_type": 2 00:24:19.829 } 00:24:19.829 ], 00:24:19.829 "driver_specific": {} 00:24:19.829 } 00:24:19.829 ] 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:19.829 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.088 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:20.088 "name": "Existed_Raid", 00:24:20.088 "uuid": "5036b29f-48e5-4aaf-9d62-7950c695a44e", 00:24:20.088 "strip_size_kb": 64, 00:24:20.088 "state": "configuring", 00:24:20.088 "raid_level": "concat", 00:24:20.088 "superblock": true, 00:24:20.088 "num_base_bdevs": 4, 00:24:20.088 "num_base_bdevs_discovered": 3, 00:24:20.088 "num_base_bdevs_operational": 4, 00:24:20.088 "base_bdevs_list": [ 00:24:20.088 { 00:24:20.088 "name": "BaseBdev1", 00:24:20.088 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:20.088 
"is_configured": true, 00:24:20.088 "data_offset": 2048, 00:24:20.088 "data_size": 63488 00:24:20.088 }, 00:24:20.088 { 00:24:20.088 "name": "BaseBdev2", 00:24:20.088 "uuid": "b3faa60a-ba33-4982-a6ab-37813f5e2f25", 00:24:20.088 "is_configured": true, 00:24:20.088 "data_offset": 2048, 00:24:20.088 "data_size": 63488 00:24:20.088 }, 00:24:20.088 { 00:24:20.088 "name": "BaseBdev3", 00:24:20.088 "uuid": "a4eebb70-b226-4c1c-a69f-bf5eb48e2489", 00:24:20.088 "is_configured": true, 00:24:20.088 "data_offset": 2048, 00:24:20.088 "data_size": 63488 00:24:20.088 }, 00:24:20.088 { 00:24:20.088 "name": "BaseBdev4", 00:24:20.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:20.088 "is_configured": false, 00:24:20.088 "data_offset": 0, 00:24:20.088 "data_size": 0 00:24:20.088 } 00:24:20.088 ] 00:24:20.088 }' 00:24:20.088 14:17:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:20.088 14:17:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:20.654 14:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:20.912 [2024-07-15 14:17:06.764470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:20.912 [2024-07-15 14:17:06.765011] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:24:20.912 [2024-07-15 14:17:06.765186] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:20.912 [2024-07-15 14:17:06.765352] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:24:20.912 [2024-07-15 14:17:06.765643] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:24:20.912 [2024-07-15 14:17:06.765694] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:24:20.912 [2024-07-15 14:17:06.765961] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:20.912 BaseBdev4 00:24:20.912 14:17:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:20.912 14:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:20.912 14:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:20.912 14:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:20.912 14:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:20.912 14:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:20.912 14:17:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:21.170 14:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:21.428 [ 00:24:21.428 { 00:24:21.428 "name": "BaseBdev4", 00:24:21.428 "aliases": [ 00:24:21.428 "53002bb6-3f91-43d9-86dd-0405d18462ac" 00:24:21.428 ], 00:24:21.428 "product_name": "Malloc disk", 00:24:21.428 "block_size": 512, 00:24:21.428 "num_blocks": 65536, 00:24:21.428 "uuid": 
"53002bb6-3f91-43d9-86dd-0405d18462ac", 00:24:21.428 "assigned_rate_limits": { 00:24:21.428 "rw_ios_per_sec": 0, 00:24:21.428 "rw_mbytes_per_sec": 0, 00:24:21.428 "r_mbytes_per_sec": 0, 00:24:21.428 "w_mbytes_per_sec": 0 00:24:21.428 }, 00:24:21.428 "claimed": true, 00:24:21.428 "claim_type": "exclusive_write", 00:24:21.428 "zoned": false, 00:24:21.428 "supported_io_types": { 00:24:21.428 "read": true, 00:24:21.428 "write": true, 00:24:21.428 "unmap": true, 00:24:21.428 "flush": true, 00:24:21.428 "reset": true, 00:24:21.428 "nvme_admin": false, 00:24:21.428 "nvme_io": false, 00:24:21.428 "nvme_io_md": false, 00:24:21.428 "write_zeroes": true, 00:24:21.428 "zcopy": true, 00:24:21.428 "get_zone_info": false, 00:24:21.428 "zone_management": false, 00:24:21.428 "zone_append": false, 00:24:21.428 "compare": false, 00:24:21.428 "compare_and_write": false, 00:24:21.428 "abort": true, 00:24:21.428 "seek_hole": false, 00:24:21.428 "seek_data": false, 00:24:21.428 "copy": true, 00:24:21.428 "nvme_iov_md": false 00:24:21.428 }, 00:24:21.428 "memory_domains": [ 00:24:21.428 { 00:24:21.428 "dma_device_id": "system", 00:24:21.428 "dma_device_type": 1 00:24:21.428 }, 00:24:21.428 { 00:24:21.428 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:21.428 "dma_device_type": 2 00:24:21.428 } 00:24:21.428 ], 00:24:21.428 "driver_specific": {} 00:24:21.428 } 00:24:21.428 ] 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.428 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.687 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:21.687 "name": "Existed_Raid", 00:24:21.687 "uuid": "5036b29f-48e5-4aaf-9d62-7950c695a44e", 00:24:21.687 "strip_size_kb": 64, 00:24:21.687 "state": "online", 00:24:21.687 "raid_level": "concat", 00:24:21.687 "superblock": true, 00:24:21.687 
"num_base_bdevs": 4, 00:24:21.687 "num_base_bdevs_discovered": 4, 00:24:21.687 "num_base_bdevs_operational": 4, 00:24:21.687 "base_bdevs_list": [ 00:24:21.687 { 00:24:21.687 "name": "BaseBdev1", 00:24:21.687 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:21.687 "is_configured": true, 00:24:21.687 "data_offset": 2048, 00:24:21.687 "data_size": 63488 00:24:21.687 }, 00:24:21.687 { 00:24:21.687 "name": "BaseBdev2", 00:24:21.687 "uuid": "b3faa60a-ba33-4982-a6ab-37813f5e2f25", 00:24:21.687 "is_configured": true, 00:24:21.687 "data_offset": 2048, 00:24:21.687 "data_size": 63488 00:24:21.687 }, 00:24:21.687 { 00:24:21.687 "name": "BaseBdev3", 00:24:21.687 "uuid": "a4eebb70-b226-4c1c-a69f-bf5eb48e2489", 00:24:21.687 "is_configured": true, 00:24:21.687 "data_offset": 2048, 00:24:21.687 "data_size": 63488 00:24:21.687 }, 00:24:21.687 { 00:24:21.687 "name": "BaseBdev4", 00:24:21.687 "uuid": "53002bb6-3f91-43d9-86dd-0405d18462ac", 00:24:21.687 "is_configured": true, 00:24:21.687 "data_offset": 2048, 00:24:21.687 "data_size": 63488 00:24:21.687 } 00:24:21.687 ] 00:24:21.687 }' 00:24:21.687 14:17:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:21.687 14:17:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:22.618 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:22.618 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:22.618 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:22.618 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:22.618 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:22.619 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:22.619 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:22.619 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:22.619 [2024-07-15 14:17:08.589080] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:22.619 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:22.619 "name": "Existed_Raid", 00:24:22.619 "aliases": [ 00:24:22.619 "5036b29f-48e5-4aaf-9d62-7950c695a44e" 00:24:22.619 ], 00:24:22.619 "product_name": "Raid Volume", 00:24:22.619 "block_size": 512, 00:24:22.619 "num_blocks": 253952, 00:24:22.619 "uuid": "5036b29f-48e5-4aaf-9d62-7950c695a44e", 00:24:22.619 "assigned_rate_limits": { 00:24:22.619 "rw_ios_per_sec": 0, 00:24:22.619 "rw_mbytes_per_sec": 0, 00:24:22.619 "r_mbytes_per_sec": 0, 00:24:22.619 "w_mbytes_per_sec": 0 00:24:22.619 }, 00:24:22.619 "claimed": false, 00:24:22.619 "zoned": false, 00:24:22.619 "supported_io_types": { 00:24:22.619 "read": true, 00:24:22.619 "write": true, 00:24:22.619 "unmap": true, 00:24:22.619 "flush": true, 00:24:22.619 "reset": true, 00:24:22.619 "nvme_admin": false, 00:24:22.619 "nvme_io": false, 00:24:22.619 "nvme_io_md": false, 00:24:22.619 "write_zeroes": true, 00:24:22.619 "zcopy": false, 00:24:22.619 "get_zone_info": false, 00:24:22.619 "zone_management": false, 00:24:22.619 "zone_append": false, 00:24:22.619 "compare": false, 
00:24:22.619 "compare_and_write": false, 00:24:22.619 "abort": false, 00:24:22.619 "seek_hole": false, 00:24:22.619 "seek_data": false, 00:24:22.619 "copy": false, 00:24:22.619 "nvme_iov_md": false 00:24:22.619 }, 00:24:22.619 "memory_domains": [ 00:24:22.619 { 00:24:22.619 "dma_device_id": "system", 00:24:22.619 "dma_device_type": 1 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.619 "dma_device_type": 2 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "dma_device_id": "system", 00:24:22.619 "dma_device_type": 1 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.619 "dma_device_type": 2 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "dma_device_id": "system", 00:24:22.619 "dma_device_type": 1 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.619 "dma_device_type": 2 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "dma_device_id": "system", 00:24:22.619 "dma_device_type": 1 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:22.619 "dma_device_type": 2 00:24:22.619 } 00:24:22.619 ], 00:24:22.619 "driver_specific": { 00:24:22.619 "raid": { 00:24:22.619 "uuid": "5036b29f-48e5-4aaf-9d62-7950c695a44e", 00:24:22.619 "strip_size_kb": 64, 00:24:22.619 "state": "online", 00:24:22.619 "raid_level": "concat", 00:24:22.619 "superblock": true, 00:24:22.619 "num_base_bdevs": 4, 00:24:22.619 "num_base_bdevs_discovered": 4, 00:24:22.619 "num_base_bdevs_operational": 4, 00:24:22.619 "base_bdevs_list": [ 00:24:22.619 { 00:24:22.619 "name": "BaseBdev1", 00:24:22.619 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:22.619 "is_configured": true, 00:24:22.619 "data_offset": 2048, 00:24:22.619 "data_size": 63488 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "name": "BaseBdev2", 00:24:22.619 "uuid": "b3faa60a-ba33-4982-a6ab-37813f5e2f25", 00:24:22.619 "is_configured": true, 00:24:22.619 "data_offset": 2048, 00:24:22.619 "data_size": 63488 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "name": "BaseBdev3", 00:24:22.619 "uuid": "a4eebb70-b226-4c1c-a69f-bf5eb48e2489", 00:24:22.619 "is_configured": true, 00:24:22.619 "data_offset": 2048, 00:24:22.619 "data_size": 63488 00:24:22.619 }, 00:24:22.619 { 00:24:22.619 "name": "BaseBdev4", 00:24:22.619 "uuid": "53002bb6-3f91-43d9-86dd-0405d18462ac", 00:24:22.619 "is_configured": true, 00:24:22.619 "data_offset": 2048, 00:24:22.619 "data_size": 63488 00:24:22.619 } 00:24:22.619 ] 00:24:22.619 } 00:24:22.619 } 00:24:22.619 }' 00:24:22.619 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:22.876 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:22.876 BaseBdev2 00:24:22.876 BaseBdev3 00:24:22.876 BaseBdev4' 00:24:22.876 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:22.876 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:22.876 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:23.138 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:23.138 "name": "BaseBdev1", 00:24:23.138 "aliases": [ 00:24:23.138 "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7" 00:24:23.138 ], 
00:24:23.138 "product_name": "Malloc disk", 00:24:23.138 "block_size": 512, 00:24:23.138 "num_blocks": 65536, 00:24:23.138 "uuid": "ecf0d0e1-6b71-4eec-bd1c-fc3c0e5d6fc7", 00:24:23.138 "assigned_rate_limits": { 00:24:23.138 "rw_ios_per_sec": 0, 00:24:23.138 "rw_mbytes_per_sec": 0, 00:24:23.138 "r_mbytes_per_sec": 0, 00:24:23.138 "w_mbytes_per_sec": 0 00:24:23.138 }, 00:24:23.138 "claimed": true, 00:24:23.138 "claim_type": "exclusive_write", 00:24:23.138 "zoned": false, 00:24:23.138 "supported_io_types": { 00:24:23.138 "read": true, 00:24:23.138 "write": true, 00:24:23.138 "unmap": true, 00:24:23.138 "flush": true, 00:24:23.138 "reset": true, 00:24:23.138 "nvme_admin": false, 00:24:23.138 "nvme_io": false, 00:24:23.138 "nvme_io_md": false, 00:24:23.138 "write_zeroes": true, 00:24:23.138 "zcopy": true, 00:24:23.138 "get_zone_info": false, 00:24:23.138 "zone_management": false, 00:24:23.138 "zone_append": false, 00:24:23.138 "compare": false, 00:24:23.138 "compare_and_write": false, 00:24:23.138 "abort": true, 00:24:23.138 "seek_hole": false, 00:24:23.138 "seek_data": false, 00:24:23.138 "copy": true, 00:24:23.138 "nvme_iov_md": false 00:24:23.138 }, 00:24:23.138 "memory_domains": [ 00:24:23.138 { 00:24:23.138 "dma_device_id": "system", 00:24:23.138 "dma_device_type": 1 00:24:23.138 }, 00:24:23.138 { 00:24:23.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.138 "dma_device_type": 2 00:24:23.138 } 00:24:23.138 ], 00:24:23.138 "driver_specific": {} 00:24:23.138 }' 00:24:23.138 14:17:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.138 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.138 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:23.138 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.138 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:23.396 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:23.654 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:23.654 "name": "BaseBdev2", 00:24:23.654 "aliases": [ 00:24:23.654 "b3faa60a-ba33-4982-a6ab-37813f5e2f25" 00:24:23.654 ], 00:24:23.654 "product_name": "Malloc disk", 00:24:23.654 "block_size": 512, 00:24:23.654 "num_blocks": 65536, 00:24:23.654 "uuid": 
"b3faa60a-ba33-4982-a6ab-37813f5e2f25", 00:24:23.654 "assigned_rate_limits": { 00:24:23.654 "rw_ios_per_sec": 0, 00:24:23.654 "rw_mbytes_per_sec": 0, 00:24:23.654 "r_mbytes_per_sec": 0, 00:24:23.654 "w_mbytes_per_sec": 0 00:24:23.654 }, 00:24:23.654 "claimed": true, 00:24:23.654 "claim_type": "exclusive_write", 00:24:23.654 "zoned": false, 00:24:23.654 "supported_io_types": { 00:24:23.654 "read": true, 00:24:23.654 "write": true, 00:24:23.654 "unmap": true, 00:24:23.654 "flush": true, 00:24:23.654 "reset": true, 00:24:23.654 "nvme_admin": false, 00:24:23.654 "nvme_io": false, 00:24:23.654 "nvme_io_md": false, 00:24:23.654 "write_zeroes": true, 00:24:23.654 "zcopy": true, 00:24:23.654 "get_zone_info": false, 00:24:23.654 "zone_management": false, 00:24:23.654 "zone_append": false, 00:24:23.654 "compare": false, 00:24:23.654 "compare_and_write": false, 00:24:23.654 "abort": true, 00:24:23.654 "seek_hole": false, 00:24:23.654 "seek_data": false, 00:24:23.654 "copy": true, 00:24:23.654 "nvme_iov_md": false 00:24:23.654 }, 00:24:23.654 "memory_domains": [ 00:24:23.654 { 00:24:23.654 "dma_device_id": "system", 00:24:23.654 "dma_device_type": 1 00:24:23.654 }, 00:24:23.654 { 00:24:23.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:23.654 "dma_device_type": 2 00:24:23.654 } 00:24:23.654 ], 00:24:23.654 "driver_specific": {} 00:24:23.654 }' 00:24:23.654 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.912 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:23.912 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:23.912 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.912 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:23.912 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:23.912 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:23.912 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:24.170 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:24.170 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:24.170 14:17:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:24.170 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:24.170 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:24.170 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:24.170 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:24.426 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:24.426 "name": "BaseBdev3", 00:24:24.426 "aliases": [ 00:24:24.426 "a4eebb70-b226-4c1c-a69f-bf5eb48e2489" 00:24:24.426 ], 00:24:24.426 "product_name": "Malloc disk", 00:24:24.426 "block_size": 512, 00:24:24.426 "num_blocks": 65536, 00:24:24.426 "uuid": "a4eebb70-b226-4c1c-a69f-bf5eb48e2489", 00:24:24.426 "assigned_rate_limits": { 00:24:24.426 "rw_ios_per_sec": 0, 00:24:24.426 "rw_mbytes_per_sec": 0, 
00:24:24.426 "r_mbytes_per_sec": 0, 00:24:24.426 "w_mbytes_per_sec": 0 00:24:24.426 }, 00:24:24.426 "claimed": true, 00:24:24.426 "claim_type": "exclusive_write", 00:24:24.426 "zoned": false, 00:24:24.426 "supported_io_types": { 00:24:24.426 "read": true, 00:24:24.426 "write": true, 00:24:24.426 "unmap": true, 00:24:24.426 "flush": true, 00:24:24.426 "reset": true, 00:24:24.426 "nvme_admin": false, 00:24:24.426 "nvme_io": false, 00:24:24.426 "nvme_io_md": false, 00:24:24.426 "write_zeroes": true, 00:24:24.426 "zcopy": true, 00:24:24.426 "get_zone_info": false, 00:24:24.426 "zone_management": false, 00:24:24.426 "zone_append": false, 00:24:24.426 "compare": false, 00:24:24.426 "compare_and_write": false, 00:24:24.427 "abort": true, 00:24:24.427 "seek_hole": false, 00:24:24.427 "seek_data": false, 00:24:24.427 "copy": true, 00:24:24.427 "nvme_iov_md": false 00:24:24.427 }, 00:24:24.427 "memory_domains": [ 00:24:24.427 { 00:24:24.427 "dma_device_id": "system", 00:24:24.427 "dma_device_type": 1 00:24:24.427 }, 00:24:24.427 { 00:24:24.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.427 "dma_device_type": 2 00:24:24.427 } 00:24:24.427 ], 00:24:24.427 "driver_specific": {} 00:24:24.427 }' 00:24:24.427 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:24.427 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:24.427 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:24.427 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:24.427 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:24.684 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:24.943 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:24.943 "name": "BaseBdev4", 00:24:24.943 "aliases": [ 00:24:24.943 "53002bb6-3f91-43d9-86dd-0405d18462ac" 00:24:24.943 ], 00:24:24.943 "product_name": "Malloc disk", 00:24:24.943 "block_size": 512, 00:24:24.943 "num_blocks": 65536, 00:24:24.943 "uuid": "53002bb6-3f91-43d9-86dd-0405d18462ac", 00:24:24.943 "assigned_rate_limits": { 00:24:24.943 "rw_ios_per_sec": 0, 00:24:24.943 "rw_mbytes_per_sec": 0, 00:24:24.943 "r_mbytes_per_sec": 0, 00:24:24.943 "w_mbytes_per_sec": 0 00:24:24.943 }, 00:24:24.943 "claimed": true, 00:24:24.943 "claim_type": 
"exclusive_write", 00:24:24.943 "zoned": false, 00:24:24.943 "supported_io_types": { 00:24:24.943 "read": true, 00:24:24.943 "write": true, 00:24:24.943 "unmap": true, 00:24:24.943 "flush": true, 00:24:24.943 "reset": true, 00:24:24.943 "nvme_admin": false, 00:24:24.943 "nvme_io": false, 00:24:24.943 "nvme_io_md": false, 00:24:24.943 "write_zeroes": true, 00:24:24.943 "zcopy": true, 00:24:24.943 "get_zone_info": false, 00:24:24.943 "zone_management": false, 00:24:24.943 "zone_append": false, 00:24:24.943 "compare": false, 00:24:24.943 "compare_and_write": false, 00:24:24.943 "abort": true, 00:24:24.943 "seek_hole": false, 00:24:24.943 "seek_data": false, 00:24:24.943 "copy": true, 00:24:24.943 "nvme_iov_md": false 00:24:24.943 }, 00:24:24.943 "memory_domains": [ 00:24:24.943 { 00:24:24.943 "dma_device_id": "system", 00:24:24.943 "dma_device_type": 1 00:24:24.943 }, 00:24:24.943 { 00:24:24.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:24.943 "dma_device_type": 2 00:24:24.943 } 00:24:24.943 ], 00:24:24.943 "driver_specific": {} 00:24:24.943 }' 00:24:24.943 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:25.202 14:17:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:25.202 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:25.202 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:25.202 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:25.202 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:25.202 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:25.202 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:25.469 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:25.469 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:25.469 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:25.469 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:25.469 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:25.755 [2024-07-15 14:17:11.557487] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:25.755 [2024-07-15 14:17:11.557712] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:25.755 [2024-07-15 14:17:11.557894] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 
64 3 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:25.755 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.013 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.013 "name": "Existed_Raid", 00:24:26.013 "uuid": "5036b29f-48e5-4aaf-9d62-7950c695a44e", 00:24:26.013 "strip_size_kb": 64, 00:24:26.013 "state": "offline", 00:24:26.013 "raid_level": "concat", 00:24:26.013 "superblock": true, 00:24:26.013 "num_base_bdevs": 4, 00:24:26.013 "num_base_bdevs_discovered": 3, 00:24:26.013 "num_base_bdevs_operational": 3, 00:24:26.013 "base_bdevs_list": [ 00:24:26.013 { 00:24:26.013 "name": null, 00:24:26.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.013 "is_configured": false, 00:24:26.013 "data_offset": 2048, 00:24:26.013 "data_size": 63488 00:24:26.013 }, 00:24:26.013 { 00:24:26.013 "name": "BaseBdev2", 00:24:26.013 "uuid": "b3faa60a-ba33-4982-a6ab-37813f5e2f25", 00:24:26.013 "is_configured": true, 00:24:26.013 "data_offset": 2048, 00:24:26.013 "data_size": 63488 00:24:26.013 }, 00:24:26.013 { 00:24:26.013 "name": "BaseBdev3", 00:24:26.013 "uuid": "a4eebb70-b226-4c1c-a69f-bf5eb48e2489", 00:24:26.013 "is_configured": true, 00:24:26.013 "data_offset": 2048, 00:24:26.013 "data_size": 63488 00:24:26.013 }, 00:24:26.013 { 00:24:26.013 "name": "BaseBdev4", 00:24:26.013 "uuid": "53002bb6-3f91-43d9-86dd-0405d18462ac", 00:24:26.013 "is_configured": true, 00:24:26.013 "data_offset": 2048, 00:24:26.013 "data_size": 63488 00:24:26.013 } 00:24:26.013 ] 00:24:26.013 }' 00:24:26.013 14:17:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.013 14:17:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:26.948 14:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:26.948 14:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:26.948 14:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.948 14:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:26.948 14:17:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:26.948 14:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:26.948 14:17:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:27.207 [2024-07-15 14:17:13.119384] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:27.466 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:27.466 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:27.466 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:27.466 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.724 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:27.724 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:27.724 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:27.983 [2024-07-15 14:17:13.730012] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:27.983 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:27.983 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:27.983 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:27.983 14:17:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:28.241 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:28.241 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:28.241 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:28.500 [2024-07-15 14:17:14.412312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:28.500 [2024-07-15 14:17:14.412575] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:24:28.758 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:28.758 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:28.758 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.758 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:29.069 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:29.069 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:29.069 14:17:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:29.069 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:29.069 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:29.069 14:17:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:29.069 BaseBdev2 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:29.343 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:29.600 [ 00:24:29.600 { 00:24:29.600 "name": "BaseBdev2", 00:24:29.600 "aliases": [ 00:24:29.600 "748a7839-f0a3-48ec-8898-503c1b2be5b9" 00:24:29.600 ], 00:24:29.600 "product_name": "Malloc disk", 00:24:29.600 "block_size": 512, 00:24:29.600 "num_blocks": 65536, 00:24:29.600 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:29.600 "assigned_rate_limits": { 00:24:29.600 "rw_ios_per_sec": 0, 00:24:29.600 "rw_mbytes_per_sec": 0, 00:24:29.600 "r_mbytes_per_sec": 0, 00:24:29.600 "w_mbytes_per_sec": 0 00:24:29.600 }, 00:24:29.600 "claimed": false, 00:24:29.600 "zoned": false, 00:24:29.600 "supported_io_types": { 00:24:29.600 "read": true, 00:24:29.600 "write": true, 00:24:29.600 "unmap": true, 00:24:29.600 "flush": true, 00:24:29.600 "reset": true, 00:24:29.600 "nvme_admin": false, 00:24:29.600 "nvme_io": false, 00:24:29.600 "nvme_io_md": false, 00:24:29.600 "write_zeroes": true, 00:24:29.600 "zcopy": true, 00:24:29.600 "get_zone_info": false, 00:24:29.600 "zone_management": false, 00:24:29.600 "zone_append": false, 00:24:29.600 "compare": false, 00:24:29.600 "compare_and_write": false, 00:24:29.600 "abort": true, 00:24:29.600 "seek_hole": false, 00:24:29.600 "seek_data": false, 00:24:29.600 "copy": true, 00:24:29.600 "nvme_iov_md": false 00:24:29.600 }, 00:24:29.600 "memory_domains": [ 00:24:29.600 { 00:24:29.600 "dma_device_id": "system", 00:24:29.600 "dma_device_type": 1 00:24:29.600 }, 00:24:29.600 { 00:24:29.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:29.600 "dma_device_type": 2 00:24:29.600 } 00:24:29.600 ], 00:24:29.600 "driver_specific": {} 00:24:29.600 } 00:24:29.600 ] 00:24:29.857 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:29.857 14:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:29.857 14:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
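[The loop traced above and below (bdev_raid.sh lines 301-303) pre-creates BaseBdev2 through BaseBdev4 as malloc bdevs before the next RAID assembly. Assuming an SPDK target is already serving the RPC socket /var/tmp/spdk-raid.sock, as in this run, a minimal standalone sketch of the same sequence would be (sizes as recorded in the log: 32 MiB per disk with 512-byte blocks, i.e. 65536 blocks each):

    # create the RAM-backed base devices the test expects
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
    # list any RAID bdevs built on top of them, as the verify_raid_bdev_state helper does
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all

Malloc bdevs are used here because they are cheap RAM-backed disks that can be created and deleted instantly, which is what the state-transition checks in this test depend on; the authoritative command sequence remains the one recorded in the trace itself.]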
00:24:29.857 14:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:30.115 BaseBdev3 00:24:30.115 14:17:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:30.115 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:24:30.115 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:30.115 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:30.115 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:30.115 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:30.115 14:17:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:30.373 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:30.631 [ 00:24:30.631 { 00:24:30.631 "name": "BaseBdev3", 00:24:30.631 "aliases": [ 00:24:30.631 "d91b7dba-c205-451f-a7ba-40854d9fb95b" 00:24:30.631 ], 00:24:30.631 "product_name": "Malloc disk", 00:24:30.631 "block_size": 512, 00:24:30.631 "num_blocks": 65536, 00:24:30.631 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:30.631 "assigned_rate_limits": { 00:24:30.631 "rw_ios_per_sec": 0, 00:24:30.631 "rw_mbytes_per_sec": 0, 00:24:30.631 "r_mbytes_per_sec": 0, 00:24:30.631 "w_mbytes_per_sec": 0 00:24:30.631 }, 00:24:30.631 "claimed": false, 00:24:30.631 "zoned": false, 00:24:30.631 "supported_io_types": { 00:24:30.631 "read": true, 00:24:30.631 "write": true, 00:24:30.631 "unmap": true, 00:24:30.631 "flush": true, 00:24:30.631 "reset": true, 00:24:30.631 "nvme_admin": false, 00:24:30.631 "nvme_io": false, 00:24:30.631 "nvme_io_md": false, 00:24:30.631 "write_zeroes": true, 00:24:30.631 "zcopy": true, 00:24:30.631 "get_zone_info": false, 00:24:30.631 "zone_management": false, 00:24:30.631 "zone_append": false, 00:24:30.631 "compare": false, 00:24:30.631 "compare_and_write": false, 00:24:30.631 "abort": true, 00:24:30.631 "seek_hole": false, 00:24:30.631 "seek_data": false, 00:24:30.631 "copy": true, 00:24:30.631 "nvme_iov_md": false 00:24:30.631 }, 00:24:30.631 "memory_domains": [ 00:24:30.631 { 00:24:30.631 "dma_device_id": "system", 00:24:30.631 "dma_device_type": 1 00:24:30.631 }, 00:24:30.631 { 00:24:30.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.631 "dma_device_type": 2 00:24:30.631 } 00:24:30.631 ], 00:24:30.631 "driver_specific": {} 00:24:30.631 } 00:24:30.631 ] 00:24:30.631 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:30.631 14:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:30.631 14:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:30.631 14:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:30.889 BaseBdev4 00:24:30.889 14:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- 
# waitforbdev BaseBdev4 00:24:30.889 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:24:30.889 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:30.889 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:30.889 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:30.889 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:30.889 14:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:31.145 14:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:31.402 [ 00:24:31.402 { 00:24:31.402 "name": "BaseBdev4", 00:24:31.402 "aliases": [ 00:24:31.402 "7f13259c-1831-471a-a55c-08dc257f001a" 00:24:31.402 ], 00:24:31.402 "product_name": "Malloc disk", 00:24:31.402 "block_size": 512, 00:24:31.402 "num_blocks": 65536, 00:24:31.402 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:31.402 "assigned_rate_limits": { 00:24:31.402 "rw_ios_per_sec": 0, 00:24:31.402 "rw_mbytes_per_sec": 0, 00:24:31.402 "r_mbytes_per_sec": 0, 00:24:31.402 "w_mbytes_per_sec": 0 00:24:31.402 }, 00:24:31.402 "claimed": false, 00:24:31.402 "zoned": false, 00:24:31.402 "supported_io_types": { 00:24:31.402 "read": true, 00:24:31.402 "write": true, 00:24:31.402 "unmap": true, 00:24:31.402 "flush": true, 00:24:31.402 "reset": true, 00:24:31.402 "nvme_admin": false, 00:24:31.402 "nvme_io": false, 00:24:31.402 "nvme_io_md": false, 00:24:31.402 "write_zeroes": true, 00:24:31.402 "zcopy": true, 00:24:31.402 "get_zone_info": false, 00:24:31.402 "zone_management": false, 00:24:31.402 "zone_append": false, 00:24:31.402 "compare": false, 00:24:31.402 "compare_and_write": false, 00:24:31.402 "abort": true, 00:24:31.402 "seek_hole": false, 00:24:31.402 "seek_data": false, 00:24:31.402 "copy": true, 00:24:31.402 "nvme_iov_md": false 00:24:31.402 }, 00:24:31.402 "memory_domains": [ 00:24:31.402 { 00:24:31.402 "dma_device_id": "system", 00:24:31.402 "dma_device_type": 1 00:24:31.402 }, 00:24:31.402 { 00:24:31.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.402 "dma_device_type": 2 00:24:31.402 } 00:24:31.402 ], 00:24:31.402 "driver_specific": {} 00:24:31.402 } 00:24:31.402 ] 00:24:31.402 14:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:31.402 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:31.402 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:31.402 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:31.661 [2024-07-15 14:17:17.541692] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:31.661 [2024-07-15 14:17:17.542024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:31.661 [2024-07-15 14:17:17.542158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev2 is claimed 00:24:31.661 [2024-07-15 14:17:17.543720] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:31.661 [2024-07-15 14:17:17.543922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.661 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.919 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.919 "name": "Existed_Raid", 00:24:31.919 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:31.919 "strip_size_kb": 64, 00:24:31.919 "state": "configuring", 00:24:31.919 "raid_level": "concat", 00:24:31.919 "superblock": true, 00:24:31.919 "num_base_bdevs": 4, 00:24:31.919 "num_base_bdevs_discovered": 3, 00:24:31.919 "num_base_bdevs_operational": 4, 00:24:31.919 "base_bdevs_list": [ 00:24:31.919 { 00:24:31.919 "name": "BaseBdev1", 00:24:31.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.919 "is_configured": false, 00:24:31.919 "data_offset": 0, 00:24:31.919 "data_size": 0 00:24:31.919 }, 00:24:31.920 { 00:24:31.920 "name": "BaseBdev2", 00:24:31.920 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:31.920 "is_configured": true, 00:24:31.920 "data_offset": 2048, 00:24:31.920 "data_size": 63488 00:24:31.920 }, 00:24:31.920 { 00:24:31.920 "name": "BaseBdev3", 00:24:31.920 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:31.920 "is_configured": true, 00:24:31.920 "data_offset": 2048, 00:24:31.920 "data_size": 63488 00:24:31.920 }, 00:24:31.920 { 00:24:31.920 "name": "BaseBdev4", 00:24:31.920 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:31.920 "is_configured": true, 00:24:31.920 "data_offset": 2048, 00:24:31.920 "data_size": 63488 00:24:31.920 } 00:24:31.920 ] 00:24:31.920 }' 00:24:31.920 14:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.920 14:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:32.485 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:32.747 [2024-07-15 14:17:18.613812] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:32.747 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.005 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.005 "name": "Existed_Raid", 00:24:33.005 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:33.005 "strip_size_kb": 64, 00:24:33.005 "state": "configuring", 00:24:33.005 "raid_level": "concat", 00:24:33.005 "superblock": true, 00:24:33.005 "num_base_bdevs": 4, 00:24:33.005 "num_base_bdevs_discovered": 2, 00:24:33.005 "num_base_bdevs_operational": 4, 00:24:33.005 "base_bdevs_list": [ 00:24:33.005 { 00:24:33.006 "name": "BaseBdev1", 00:24:33.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.006 "is_configured": false, 00:24:33.006 "data_offset": 0, 00:24:33.006 "data_size": 0 00:24:33.006 }, 00:24:33.006 { 00:24:33.006 "name": null, 00:24:33.006 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:33.006 "is_configured": false, 00:24:33.006 "data_offset": 2048, 00:24:33.006 "data_size": 63488 00:24:33.006 }, 00:24:33.006 { 00:24:33.006 "name": "BaseBdev3", 00:24:33.006 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:33.006 "is_configured": true, 00:24:33.006 "data_offset": 2048, 00:24:33.006 "data_size": 63488 00:24:33.006 }, 00:24:33.006 { 00:24:33.006 "name": "BaseBdev4", 00:24:33.006 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:33.006 "is_configured": true, 00:24:33.006 "data_offset": 2048, 00:24:33.006 "data_size": 63488 00:24:33.006 } 00:24:33.006 ] 00:24:33.006 }' 00:24:33.006 14:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.006 14:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:33.572 14:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:24:33.572 14:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:33.831 14:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:33.831 14:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:34.090 [2024-07-15 14:17:20.058191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:34.090 BaseBdev1 00:24:34.090 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:34.090 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:24:34.090 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:34.090 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:34.090 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:34.090 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:34.090 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.349 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:34.607 [ 00:24:34.607 { 00:24:34.607 "name": "BaseBdev1", 00:24:34.607 "aliases": [ 00:24:34.607 "8ca965bd-187d-43fd-866d-ea37dbf0d9a6" 00:24:34.607 ], 00:24:34.607 "product_name": "Malloc disk", 00:24:34.607 "block_size": 512, 00:24:34.607 "num_blocks": 65536, 00:24:34.607 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:34.607 "assigned_rate_limits": { 00:24:34.607 "rw_ios_per_sec": 0, 00:24:34.607 "rw_mbytes_per_sec": 0, 00:24:34.607 "r_mbytes_per_sec": 0, 00:24:34.607 "w_mbytes_per_sec": 0 00:24:34.607 }, 00:24:34.607 "claimed": true, 00:24:34.607 "claim_type": "exclusive_write", 00:24:34.607 "zoned": false, 00:24:34.607 "supported_io_types": { 00:24:34.607 "read": true, 00:24:34.607 "write": true, 00:24:34.607 "unmap": true, 00:24:34.607 "flush": true, 00:24:34.607 "reset": true, 00:24:34.607 "nvme_admin": false, 00:24:34.607 "nvme_io": false, 00:24:34.607 "nvme_io_md": false, 00:24:34.607 "write_zeroes": true, 00:24:34.607 "zcopy": true, 00:24:34.607 "get_zone_info": false, 00:24:34.607 "zone_management": false, 00:24:34.607 "zone_append": false, 00:24:34.607 "compare": false, 00:24:34.607 "compare_and_write": false, 00:24:34.607 "abort": true, 00:24:34.607 "seek_hole": false, 00:24:34.607 "seek_data": false, 00:24:34.607 "copy": true, 00:24:34.607 "nvme_iov_md": false 00:24:34.607 }, 00:24:34.607 "memory_domains": [ 00:24:34.607 { 00:24:34.607 "dma_device_id": "system", 00:24:34.607 "dma_device_type": 1 00:24:34.607 }, 00:24:34.607 { 00:24:34.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.608 "dma_device_type": 2 00:24:34.608 } 00:24:34.608 ], 00:24:34.608 "driver_specific": {} 00:24:34.608 } 00:24:34.608 ] 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:34.608 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.866 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:34.866 "name": "Existed_Raid", 00:24:34.866 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:34.866 "strip_size_kb": 64, 00:24:34.866 "state": "configuring", 00:24:34.866 "raid_level": "concat", 00:24:34.866 "superblock": true, 00:24:34.866 "num_base_bdevs": 4, 00:24:34.866 "num_base_bdevs_discovered": 3, 00:24:34.866 "num_base_bdevs_operational": 4, 00:24:34.866 "base_bdevs_list": [ 00:24:34.866 { 00:24:34.866 "name": "BaseBdev1", 00:24:34.866 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:34.866 "is_configured": true, 00:24:34.866 "data_offset": 2048, 00:24:34.866 "data_size": 63488 00:24:34.866 }, 00:24:34.866 { 00:24:34.866 "name": null, 00:24:34.866 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:34.866 "is_configured": false, 00:24:34.866 "data_offset": 2048, 00:24:34.866 "data_size": 63488 00:24:34.866 }, 00:24:34.866 { 00:24:34.866 "name": "BaseBdev3", 00:24:34.866 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:34.866 "is_configured": true, 00:24:34.866 "data_offset": 2048, 00:24:34.866 "data_size": 63488 00:24:34.866 }, 00:24:34.866 { 00:24:34.866 "name": "BaseBdev4", 00:24:34.866 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:34.866 "is_configured": true, 00:24:34.866 "data_offset": 2048, 00:24:34.866 "data_size": 63488 00:24:34.866 } 00:24:34.866 ] 00:24:34.866 }' 00:24:34.866 14:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:34.866 14:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:35.800 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.800 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:35.800 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:35.800 14:17:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:36.070 [2024-07-15 14:17:21.958582] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.070 14:17:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.357 14:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:36.357 "name": "Existed_Raid", 00:24:36.357 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:36.357 "strip_size_kb": 64, 00:24:36.357 "state": "configuring", 00:24:36.357 "raid_level": "concat", 00:24:36.357 "superblock": true, 00:24:36.357 "num_base_bdevs": 4, 00:24:36.357 "num_base_bdevs_discovered": 2, 00:24:36.357 "num_base_bdevs_operational": 4, 00:24:36.357 "base_bdevs_list": [ 00:24:36.357 { 00:24:36.357 "name": "BaseBdev1", 00:24:36.357 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:36.357 "is_configured": true, 00:24:36.357 "data_offset": 2048, 00:24:36.357 "data_size": 63488 00:24:36.357 }, 00:24:36.357 { 00:24:36.357 "name": null, 00:24:36.357 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:36.357 "is_configured": false, 00:24:36.357 "data_offset": 2048, 00:24:36.357 "data_size": 63488 00:24:36.357 }, 00:24:36.357 { 00:24:36.357 "name": null, 00:24:36.357 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:36.357 "is_configured": false, 00:24:36.357 "data_offset": 2048, 00:24:36.357 "data_size": 63488 00:24:36.357 }, 00:24:36.357 { 00:24:36.357 "name": "BaseBdev4", 00:24:36.357 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:36.357 "is_configured": true, 00:24:36.357 "data_offset": 2048, 00:24:36.357 "data_size": 63488 00:24:36.357 } 00:24:36.357 ] 00:24:36.357 }' 00:24:36.357 14:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:36.357 14:17:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:36.926 14:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.926 14:17:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:37.184 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:37.184 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:37.468 [2024-07-15 14:17:23.426812] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:37.468 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:37.469 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.469 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:38.034 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:38.034 "name": "Existed_Raid", 00:24:38.034 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:38.034 "strip_size_kb": 64, 00:24:38.034 "state": "configuring", 00:24:38.034 "raid_level": "concat", 00:24:38.034 "superblock": true, 00:24:38.034 "num_base_bdevs": 4, 00:24:38.034 "num_base_bdevs_discovered": 3, 00:24:38.034 "num_base_bdevs_operational": 4, 00:24:38.034 "base_bdevs_list": [ 00:24:38.034 { 00:24:38.034 "name": "BaseBdev1", 00:24:38.034 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:38.034 "is_configured": true, 00:24:38.034 "data_offset": 2048, 00:24:38.034 "data_size": 63488 00:24:38.034 }, 00:24:38.034 { 00:24:38.034 "name": null, 00:24:38.034 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:38.034 "is_configured": false, 00:24:38.034 "data_offset": 2048, 00:24:38.034 "data_size": 63488 00:24:38.034 }, 00:24:38.034 { 00:24:38.034 "name": "BaseBdev3", 00:24:38.034 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:38.034 "is_configured": true, 00:24:38.034 "data_offset": 2048, 00:24:38.034 "data_size": 63488 00:24:38.034 }, 00:24:38.034 { 00:24:38.034 "name": "BaseBdev4", 00:24:38.034 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:38.034 "is_configured": true, 00:24:38.034 "data_offset": 2048, 
00:24:38.034 "data_size": 63488 00:24:38.034 } 00:24:38.034 ] 00:24:38.034 }' 00:24:38.034 14:17:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:38.034 14:17:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:38.623 14:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:38.623 14:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.883 14:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:38.883 14:17:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:39.142 [2024-07-15 14:17:24.959101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.142 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.400 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.400 "name": "Existed_Raid", 00:24:39.400 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:39.400 "strip_size_kb": 64, 00:24:39.400 "state": "configuring", 00:24:39.400 "raid_level": "concat", 00:24:39.400 "superblock": true, 00:24:39.400 "num_base_bdevs": 4, 00:24:39.400 "num_base_bdevs_discovered": 2, 00:24:39.400 "num_base_bdevs_operational": 4, 00:24:39.400 "base_bdevs_list": [ 00:24:39.400 { 00:24:39.400 "name": null, 00:24:39.400 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:39.400 "is_configured": false, 00:24:39.400 "data_offset": 2048, 00:24:39.400 "data_size": 63488 00:24:39.400 }, 00:24:39.400 { 00:24:39.400 "name": null, 00:24:39.400 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:39.400 "is_configured": false, 00:24:39.400 "data_offset": 2048, 00:24:39.400 "data_size": 63488 00:24:39.400 }, 00:24:39.400 { 00:24:39.400 "name": "BaseBdev3", 00:24:39.400 "uuid": 
"d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:39.400 "is_configured": true, 00:24:39.400 "data_offset": 2048, 00:24:39.400 "data_size": 63488 00:24:39.400 }, 00:24:39.400 { 00:24:39.400 "name": "BaseBdev4", 00:24:39.400 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:39.400 "is_configured": true, 00:24:39.400 "data_offset": 2048, 00:24:39.400 "data_size": 63488 00:24:39.400 } 00:24:39.400 ] 00:24:39.400 }' 00:24:39.400 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.400 14:17:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:40.334 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:40.334 14:17:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.334 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:40.334 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:40.592 [2024-07-15 14:17:26.466293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.592 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:40.850 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:40.850 "name": "Existed_Raid", 00:24:40.850 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:40.850 "strip_size_kb": 64, 00:24:40.850 "state": "configuring", 00:24:40.850 "raid_level": "concat", 00:24:40.850 "superblock": true, 00:24:40.850 "num_base_bdevs": 4, 00:24:40.850 "num_base_bdevs_discovered": 3, 00:24:40.850 "num_base_bdevs_operational": 4, 00:24:40.850 "base_bdevs_list": [ 00:24:40.850 { 00:24:40.850 "name": null, 00:24:40.850 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:40.851 "is_configured": false, 
00:24:40.851 "data_offset": 2048, 00:24:40.851 "data_size": 63488 00:24:40.851 }, 00:24:40.851 { 00:24:40.851 "name": "BaseBdev2", 00:24:40.851 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:40.851 "is_configured": true, 00:24:40.851 "data_offset": 2048, 00:24:40.851 "data_size": 63488 00:24:40.851 }, 00:24:40.851 { 00:24:40.851 "name": "BaseBdev3", 00:24:40.851 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:40.851 "is_configured": true, 00:24:40.851 "data_offset": 2048, 00:24:40.851 "data_size": 63488 00:24:40.851 }, 00:24:40.851 { 00:24:40.851 "name": "BaseBdev4", 00:24:40.851 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:40.851 "is_configured": true, 00:24:40.851 "data_offset": 2048, 00:24:40.851 "data_size": 63488 00:24:40.851 } 00:24:40.851 ] 00:24:40.851 }' 00:24:40.851 14:17:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:40.851 14:17:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 14:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.417 14:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:41.675 14:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:41.675 14:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.675 14:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:41.933 14:17:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 8ca965bd-187d-43fd-866d-ea37dbf0d9a6 00:24:42.191 [2024-07-15 14:17:28.106172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:42.191 [2024-07-15 14:17:28.106558] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:24:42.191 [2024-07-15 14:17:28.106693] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:42.191 [2024-07-15 14:17:28.106924] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:24:42.191 [2024-07-15 14:17:28.107261] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:24:42.191 [2024-07-15 14:17:28.107388] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:24:42.191 NewBaseBdev 00:24:42.191 [2024-07-15 14:17:28.107631] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:42.191 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:42.191 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:24:42.191 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:24:42.191 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:24:42.191 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:24:42.191 14:17:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:24:42.191 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.449 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:42.708 [ 00:24:42.708 { 00:24:42.708 "name": "NewBaseBdev", 00:24:42.708 "aliases": [ 00:24:42.708 "8ca965bd-187d-43fd-866d-ea37dbf0d9a6" 00:24:42.708 ], 00:24:42.708 "product_name": "Malloc disk", 00:24:42.708 "block_size": 512, 00:24:42.708 "num_blocks": 65536, 00:24:42.708 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:42.708 "assigned_rate_limits": { 00:24:42.708 "rw_ios_per_sec": 0, 00:24:42.708 "rw_mbytes_per_sec": 0, 00:24:42.708 "r_mbytes_per_sec": 0, 00:24:42.708 "w_mbytes_per_sec": 0 00:24:42.708 }, 00:24:42.708 "claimed": true, 00:24:42.708 "claim_type": "exclusive_write", 00:24:42.708 "zoned": false, 00:24:42.708 "supported_io_types": { 00:24:42.708 "read": true, 00:24:42.708 "write": true, 00:24:42.708 "unmap": true, 00:24:42.708 "flush": true, 00:24:42.708 "reset": true, 00:24:42.708 "nvme_admin": false, 00:24:42.708 "nvme_io": false, 00:24:42.708 "nvme_io_md": false, 00:24:42.708 "write_zeroes": true, 00:24:42.708 "zcopy": true, 00:24:42.708 "get_zone_info": false, 00:24:42.708 "zone_management": false, 00:24:42.708 "zone_append": false, 00:24:42.708 "compare": false, 00:24:42.708 "compare_and_write": false, 00:24:42.708 "abort": true, 00:24:42.708 "seek_hole": false, 00:24:42.708 "seek_data": false, 00:24:42.708 "copy": true, 00:24:42.708 "nvme_iov_md": false 00:24:42.708 }, 00:24:42.708 "memory_domains": [ 00:24:42.708 { 00:24:42.708 "dma_device_id": "system", 00:24:42.708 "dma_device_type": 1 00:24:42.708 }, 00:24:42.708 { 00:24:42.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.708 "dma_device_type": 2 00:24:42.708 } 00:24:42.708 ], 00:24:42.708 "driver_specific": {} 00:24:42.708 } 00:24:42.708 ] 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.708 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:42.966 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:42.966 "name": "Existed_Raid", 00:24:42.966 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:42.966 "strip_size_kb": 64, 00:24:42.966 "state": "online", 00:24:42.967 "raid_level": "concat", 00:24:42.967 "superblock": true, 00:24:42.967 "num_base_bdevs": 4, 00:24:42.967 "num_base_bdevs_discovered": 4, 00:24:42.967 "num_base_bdevs_operational": 4, 00:24:42.967 "base_bdevs_list": [ 00:24:42.967 { 00:24:42.967 "name": "NewBaseBdev", 00:24:42.967 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:42.967 "is_configured": true, 00:24:42.967 "data_offset": 2048, 00:24:42.967 "data_size": 63488 00:24:42.967 }, 00:24:42.967 { 00:24:42.967 "name": "BaseBdev2", 00:24:42.967 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:42.967 "is_configured": true, 00:24:42.967 "data_offset": 2048, 00:24:42.967 "data_size": 63488 00:24:42.967 }, 00:24:42.967 { 00:24:42.967 "name": "BaseBdev3", 00:24:42.967 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:42.967 "is_configured": true, 00:24:42.967 "data_offset": 2048, 00:24:42.967 "data_size": 63488 00:24:42.967 }, 00:24:42.967 { 00:24:42.967 "name": "BaseBdev4", 00:24:42.967 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:42.967 "is_configured": true, 00:24:42.967 "data_offset": 2048, 00:24:42.967 "data_size": 63488 00:24:42.967 } 00:24:42.967 ] 00:24:42.967 }' 00:24:42.967 14:17:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:42.967 14:17:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:43.533 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:43.791 [2024-07-15 14:17:29.694675] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:43.791 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:43.791 "name": "Existed_Raid", 00:24:43.791 "aliases": [ 00:24:43.791 "9d81343b-a934-4bf7-96f9-b309456c3b94" 00:24:43.791 ], 00:24:43.791 "product_name": "Raid Volume", 00:24:43.791 "block_size": 512, 00:24:43.791 "num_blocks": 253952, 00:24:43.791 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:43.791 "assigned_rate_limits": { 00:24:43.791 "rw_ios_per_sec": 0, 00:24:43.791 "rw_mbytes_per_sec": 0, 00:24:43.791 "r_mbytes_per_sec": 0, 00:24:43.791 "w_mbytes_per_sec": 0 00:24:43.791 }, 
00:24:43.791 "claimed": false, 00:24:43.791 "zoned": false, 00:24:43.791 "supported_io_types": { 00:24:43.791 "read": true, 00:24:43.791 "write": true, 00:24:43.791 "unmap": true, 00:24:43.791 "flush": true, 00:24:43.791 "reset": true, 00:24:43.791 "nvme_admin": false, 00:24:43.791 "nvme_io": false, 00:24:43.791 "nvme_io_md": false, 00:24:43.791 "write_zeroes": true, 00:24:43.791 "zcopy": false, 00:24:43.791 "get_zone_info": false, 00:24:43.791 "zone_management": false, 00:24:43.791 "zone_append": false, 00:24:43.791 "compare": false, 00:24:43.791 "compare_and_write": false, 00:24:43.791 "abort": false, 00:24:43.791 "seek_hole": false, 00:24:43.791 "seek_data": false, 00:24:43.791 "copy": false, 00:24:43.791 "nvme_iov_md": false 00:24:43.791 }, 00:24:43.791 "memory_domains": [ 00:24:43.791 { 00:24:43.791 "dma_device_id": "system", 00:24:43.791 "dma_device_type": 1 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.791 "dma_device_type": 2 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "dma_device_id": "system", 00:24:43.791 "dma_device_type": 1 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.791 "dma_device_type": 2 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "dma_device_id": "system", 00:24:43.791 "dma_device_type": 1 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.791 "dma_device_type": 2 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "dma_device_id": "system", 00:24:43.791 "dma_device_type": 1 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.791 "dma_device_type": 2 00:24:43.791 } 00:24:43.791 ], 00:24:43.791 "driver_specific": { 00:24:43.791 "raid": { 00:24:43.791 "uuid": "9d81343b-a934-4bf7-96f9-b309456c3b94", 00:24:43.791 "strip_size_kb": 64, 00:24:43.791 "state": "online", 00:24:43.791 "raid_level": "concat", 00:24:43.791 "superblock": true, 00:24:43.791 "num_base_bdevs": 4, 00:24:43.791 "num_base_bdevs_discovered": 4, 00:24:43.791 "num_base_bdevs_operational": 4, 00:24:43.791 "base_bdevs_list": [ 00:24:43.791 { 00:24:43.791 "name": "NewBaseBdev", 00:24:43.791 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:43.791 "is_configured": true, 00:24:43.791 "data_offset": 2048, 00:24:43.791 "data_size": 63488 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "name": "BaseBdev2", 00:24:43.791 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:43.791 "is_configured": true, 00:24:43.791 "data_offset": 2048, 00:24:43.791 "data_size": 63488 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "name": "BaseBdev3", 00:24:43.791 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:43.791 "is_configured": true, 00:24:43.791 "data_offset": 2048, 00:24:43.791 "data_size": 63488 00:24:43.791 }, 00:24:43.791 { 00:24:43.791 "name": "BaseBdev4", 00:24:43.791 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:43.791 "is_configured": true, 00:24:43.791 "data_offset": 2048, 00:24:43.791 "data_size": 63488 00:24:43.791 } 00:24:43.791 ] 00:24:43.791 } 00:24:43.791 } 00:24:43.791 }' 00:24:43.791 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:43.791 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:43.791 BaseBdev2 00:24:43.791 BaseBdev3 00:24:43.791 BaseBdev4' 00:24:43.791 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:24:43.791 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:43.791 14:17:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:44.049 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:44.049 "name": "NewBaseBdev", 00:24:44.049 "aliases": [ 00:24:44.049 "8ca965bd-187d-43fd-866d-ea37dbf0d9a6" 00:24:44.049 ], 00:24:44.049 "product_name": "Malloc disk", 00:24:44.049 "block_size": 512, 00:24:44.049 "num_blocks": 65536, 00:24:44.049 "uuid": "8ca965bd-187d-43fd-866d-ea37dbf0d9a6", 00:24:44.049 "assigned_rate_limits": { 00:24:44.049 "rw_ios_per_sec": 0, 00:24:44.049 "rw_mbytes_per_sec": 0, 00:24:44.049 "r_mbytes_per_sec": 0, 00:24:44.049 "w_mbytes_per_sec": 0 00:24:44.049 }, 00:24:44.049 "claimed": true, 00:24:44.049 "claim_type": "exclusive_write", 00:24:44.049 "zoned": false, 00:24:44.049 "supported_io_types": { 00:24:44.049 "read": true, 00:24:44.049 "write": true, 00:24:44.049 "unmap": true, 00:24:44.049 "flush": true, 00:24:44.049 "reset": true, 00:24:44.049 "nvme_admin": false, 00:24:44.049 "nvme_io": false, 00:24:44.049 "nvme_io_md": false, 00:24:44.049 "write_zeroes": true, 00:24:44.049 "zcopy": true, 00:24:44.049 "get_zone_info": false, 00:24:44.049 "zone_management": false, 00:24:44.049 "zone_append": false, 00:24:44.049 "compare": false, 00:24:44.049 "compare_and_write": false, 00:24:44.049 "abort": true, 00:24:44.049 "seek_hole": false, 00:24:44.049 "seek_data": false, 00:24:44.049 "copy": true, 00:24:44.049 "nvme_iov_md": false 00:24:44.049 }, 00:24:44.049 "memory_domains": [ 00:24:44.049 { 00:24:44.049 "dma_device_id": "system", 00:24:44.049 "dma_device_type": 1 00:24:44.049 }, 00:24:44.049 { 00:24:44.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.049 "dma_device_type": 2 00:24:44.049 } 00:24:44.049 ], 00:24:44.049 "driver_specific": {} 00:24:44.049 }' 00:24:44.049 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:44.307 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:44.565 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:44.565 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:44.565 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:44.565 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:44.565 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:44.824 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:44.824 "name": "BaseBdev2", 00:24:44.824 "aliases": [ 00:24:44.824 "748a7839-f0a3-48ec-8898-503c1b2be5b9" 00:24:44.824 ], 00:24:44.824 "product_name": "Malloc disk", 00:24:44.824 "block_size": 512, 00:24:44.824 "num_blocks": 65536, 00:24:44.824 "uuid": "748a7839-f0a3-48ec-8898-503c1b2be5b9", 00:24:44.824 "assigned_rate_limits": { 00:24:44.824 "rw_ios_per_sec": 0, 00:24:44.824 "rw_mbytes_per_sec": 0, 00:24:44.824 "r_mbytes_per_sec": 0, 00:24:44.824 "w_mbytes_per_sec": 0 00:24:44.824 }, 00:24:44.824 "claimed": true, 00:24:44.824 "claim_type": "exclusive_write", 00:24:44.824 "zoned": false, 00:24:44.824 "supported_io_types": { 00:24:44.824 "read": true, 00:24:44.824 "write": true, 00:24:44.824 "unmap": true, 00:24:44.824 "flush": true, 00:24:44.824 "reset": true, 00:24:44.824 "nvme_admin": false, 00:24:44.824 "nvme_io": false, 00:24:44.824 "nvme_io_md": false, 00:24:44.824 "write_zeroes": true, 00:24:44.824 "zcopy": true, 00:24:44.824 "get_zone_info": false, 00:24:44.824 "zone_management": false, 00:24:44.824 "zone_append": false, 00:24:44.824 "compare": false, 00:24:44.824 "compare_and_write": false, 00:24:44.824 "abort": true, 00:24:44.824 "seek_hole": false, 00:24:44.824 "seek_data": false, 00:24:44.824 "copy": true, 00:24:44.824 "nvme_iov_md": false 00:24:44.824 }, 00:24:44.824 "memory_domains": [ 00:24:44.824 { 00:24:44.824 "dma_device_id": "system", 00:24:44.824 "dma_device_type": 1 00:24:44.824 }, 00:24:44.824 { 00:24:44.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:44.824 "dma_device_type": 2 00:24:44.824 } 00:24:44.824 ], 00:24:44.824 "driver_specific": {} 00:24:44.824 }' 00:24:44.824 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.824 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:44.824 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:44.824 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:44.824 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.087 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:45.087 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.087 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.087 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:45.087 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.087 14:17:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.087 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:45.087 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:45.087 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:45.087 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:45.346 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:45.346 "name": "BaseBdev3", 00:24:45.346 "aliases": [ 00:24:45.346 "d91b7dba-c205-451f-a7ba-40854d9fb95b" 00:24:45.346 ], 00:24:45.346 "product_name": "Malloc disk", 00:24:45.346 "block_size": 512, 00:24:45.346 "num_blocks": 65536, 00:24:45.346 "uuid": "d91b7dba-c205-451f-a7ba-40854d9fb95b", 00:24:45.346 "assigned_rate_limits": { 00:24:45.346 "rw_ios_per_sec": 0, 00:24:45.346 "rw_mbytes_per_sec": 0, 00:24:45.346 "r_mbytes_per_sec": 0, 00:24:45.346 "w_mbytes_per_sec": 0 00:24:45.346 }, 00:24:45.346 "claimed": true, 00:24:45.346 "claim_type": "exclusive_write", 00:24:45.346 "zoned": false, 00:24:45.346 "supported_io_types": { 00:24:45.346 "read": true, 00:24:45.346 "write": true, 00:24:45.346 "unmap": true, 00:24:45.346 "flush": true, 00:24:45.346 "reset": true, 00:24:45.346 "nvme_admin": false, 00:24:45.346 "nvme_io": false, 00:24:45.346 "nvme_io_md": false, 00:24:45.346 "write_zeroes": true, 00:24:45.346 "zcopy": true, 00:24:45.346 "get_zone_info": false, 00:24:45.346 "zone_management": false, 00:24:45.346 "zone_append": false, 00:24:45.346 "compare": false, 00:24:45.346 "compare_and_write": false, 00:24:45.346 "abort": true, 00:24:45.346 "seek_hole": false, 00:24:45.346 "seek_data": false, 00:24:45.346 "copy": true, 00:24:45.346 "nvme_iov_md": false 00:24:45.346 }, 00:24:45.346 "memory_domains": [ 00:24:45.346 { 00:24:45.346 "dma_device_id": "system", 00:24:45.346 "dma_device_type": 1 00:24:45.346 }, 00:24:45.346 { 00:24:45.346 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:45.346 "dma_device_type": 2 00:24:45.346 } 00:24:45.346 ], 00:24:45.346 "driver_specific": {} 00:24:45.346 }' 00:24:45.346 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.346 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:45.604 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.861 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:45.861 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:45.861 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:45.861 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:45.861 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:46.121 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:24:46.121 "name": "BaseBdev4", 00:24:46.121 "aliases": [ 00:24:46.121 "7f13259c-1831-471a-a55c-08dc257f001a" 00:24:46.121 ], 00:24:46.121 "product_name": "Malloc disk", 00:24:46.121 "block_size": 512, 00:24:46.121 "num_blocks": 65536, 00:24:46.121 "uuid": "7f13259c-1831-471a-a55c-08dc257f001a", 00:24:46.121 "assigned_rate_limits": { 00:24:46.121 "rw_ios_per_sec": 0, 00:24:46.121 "rw_mbytes_per_sec": 0, 00:24:46.121 "r_mbytes_per_sec": 0, 00:24:46.121 "w_mbytes_per_sec": 0 00:24:46.121 }, 00:24:46.121 "claimed": true, 00:24:46.121 "claim_type": "exclusive_write", 00:24:46.121 "zoned": false, 00:24:46.121 "supported_io_types": { 00:24:46.121 "read": true, 00:24:46.121 "write": true, 00:24:46.121 "unmap": true, 00:24:46.121 "flush": true, 00:24:46.121 "reset": true, 00:24:46.121 "nvme_admin": false, 00:24:46.121 "nvme_io": false, 00:24:46.121 "nvme_io_md": false, 00:24:46.121 "write_zeroes": true, 00:24:46.121 "zcopy": true, 00:24:46.121 "get_zone_info": false, 00:24:46.121 "zone_management": false, 00:24:46.121 "zone_append": false, 00:24:46.121 "compare": false, 00:24:46.121 "compare_and_write": false, 00:24:46.121 "abort": true, 00:24:46.121 "seek_hole": false, 00:24:46.121 "seek_data": false, 00:24:46.121 "copy": true, 00:24:46.121 "nvme_iov_md": false 00:24:46.121 }, 00:24:46.121 "memory_domains": [ 00:24:46.121 { 00:24:46.121 "dma_device_id": "system", 00:24:46.121 "dma_device_type": 1 00:24:46.121 }, 00:24:46.121 { 00:24:46.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:46.121 "dma_device_type": 2 00:24:46.121 } 00:24:46.121 ], 00:24:46.121 "driver_specific": {} 00:24:46.121 }' 00:24:46.121 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.121 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:46.121 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:46.121 14:17:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.121 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:46.121 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:46.121 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.121 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:46.379 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:46.379 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.379 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:46.379 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:46.379 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:46.694 [2024-07-15 14:17:32.550832] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:46.694 [2024-07-15 14:17:32.551028] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:46.694 [2024-07-15 14:17:32.551208] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:46.694 [2024-07-15 14:17:32.551378] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:46.694 [2024-07-15 14:17:32.551536] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 204717 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 204717 ']' 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 204717 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 204717 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 204717' 00:24:46.694 killing process with pid 204717 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 204717 00:24:46.694 [2024-07-15 14:17:32.595112] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:46.694 14:17:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 204717 00:24:46.956 [2024-07-15 14:17:32.935564] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:48.333 14:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:24:48.333 00:24:48.333 real 0m37.312s 00:24:48.333 user 1m8.845s 00:24:48.333 sys 0m4.372s 00:24:48.333 14:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:48.333 14:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:48.333 ************************************ 00:24:48.333 END TEST raid_state_function_test_sb 00:24:48.333 ************************************ 00:24:48.333 14:17:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:24:48.333 14:17:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:24:48.333 14:17:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:24:48.333 14:17:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:48.333 14:17:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:48.333 ************************************ 00:24:48.333 START TEST raid_superblock_test 00:24:48.333 ************************************ 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # 
base_bdevs_pt=() 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=205842 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 205842 /var/tmp/spdk-raid.sock 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 205842 ']' 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:48.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:48.333 14:17:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.333 [2024-07-15 14:17:34.171818] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:24:48.333 [2024-07-15 14:17:34.172288] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid205842 ] 00:24:48.593 [2024-07-15 14:17:34.338191] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.593 [2024-07-15 14:17:34.552692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.853 [2024-07-15 14:17:34.749666] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:49.421 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:49.680 malloc1 00:24:49.681 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:49.939 [2024-07-15 14:17:35.698879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:49.939 [2024-07-15 14:17:35.699181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:49.939 [2024-07-15 14:17:35.699345] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:49.939 [2024-07-15 14:17:35.699481] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:49.939 [2024-07-15 14:17:35.701318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:49.939 [2024-07-15 14:17:35.701508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:49.939 pt1 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:49.939 14:17:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:50.198 malloc2 00:24:50.198 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:50.457 [2024-07-15 14:17:36.289165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:50.457 [2024-07-15 14:17:36.289503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.457 [2024-07-15 14:17:36.289598] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:50.457 [2024-07-15 14:17:36.289809] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.457 [2024-07-15 14:17:36.291745] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.457 [2024-07-15 14:17:36.291916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:50.457 pt2 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:50.457 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:50.716 malloc3 00:24:50.716 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:50.974 [2024-07-15 14:17:36.852417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:50.974 [2024-07-15 14:17:36.852790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:50.974 [2024-07-15 14:17:36.852960] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:50.974 [2024-07-15 14:17:36.853139] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:50.974 [2024-07-15 14:17:36.855035] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:50.974 [2024-07-15 14:17:36.855210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:50.974 pt3 00:24:50.974 
14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:50.974 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:50.975 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:24:50.975 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:24:50.975 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:50.975 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:50.975 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:24:50.975 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:50.975 14:17:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:51.233 malloc4 00:24:51.233 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:51.492 [2024-07-15 14:17:37.383471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:51.492 [2024-07-15 14:17:37.383958] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.492 [2024-07-15 14:17:37.384122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:51.492 [2024-07-15 14:17:37.384270] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.492 [2024-07-15 14:17:37.386207] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.492 [2024-07-15 14:17:37.386385] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:51.492 pt4 00:24:51.492 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:24:51.492 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:24:51.492 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:51.752 [2024-07-15 14:17:37.631547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:51.752 [2024-07-15 14:17:37.633250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:51.752 [2024-07-15 14:17:37.633495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:51.752 [2024-07-15 14:17:37.633658] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:51.752 [2024-07-15 14:17:37.633957] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:51.752 [2024-07-15 14:17:37.634092] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:51.752 [2024-07-15 14:17:37.634286] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:51.752 [2024-07-15 14:17:37.634616] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:51.752 [2024-07-15 14:17:37.634752] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:51.752 [2024-07-15 14:17:37.635001] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.752 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.011 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:52.011 "name": "raid_bdev1", 00:24:52.011 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:24:52.011 "strip_size_kb": 64, 00:24:52.011 "state": "online", 00:24:52.011 "raid_level": "concat", 00:24:52.011 "superblock": true, 00:24:52.011 "num_base_bdevs": 4, 00:24:52.011 "num_base_bdevs_discovered": 4, 00:24:52.011 "num_base_bdevs_operational": 4, 00:24:52.011 "base_bdevs_list": [ 00:24:52.011 { 00:24:52.011 "name": "pt1", 00:24:52.011 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:52.011 "is_configured": true, 00:24:52.011 "data_offset": 2048, 00:24:52.011 "data_size": 63488 00:24:52.011 }, 00:24:52.011 { 00:24:52.011 "name": "pt2", 00:24:52.011 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:52.011 "is_configured": true, 00:24:52.011 "data_offset": 2048, 00:24:52.011 "data_size": 63488 00:24:52.011 }, 00:24:52.011 { 00:24:52.011 "name": "pt3", 00:24:52.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:52.011 "is_configured": true, 00:24:52.011 "data_offset": 2048, 00:24:52.011 "data_size": 63488 00:24:52.011 }, 00:24:52.011 { 00:24:52.011 "name": "pt4", 00:24:52.011 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:52.011 "is_configured": true, 00:24:52.011 "data_offset": 2048, 00:24:52.011 "data_size": 63488 00:24:52.011 } 00:24:52.011 ] 00:24:52.011 }' 00:24:52.011 14:17:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:52.011 14:17:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:52.946 [2024-07-15 14:17:38.871957] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:52.946 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:52.946 "name": "raid_bdev1", 00:24:52.946 "aliases": [ 00:24:52.946 "6032b163-cba4-4655-a96c-f22b0505b714" 00:24:52.946 ], 00:24:52.946 "product_name": "Raid Volume", 00:24:52.946 "block_size": 512, 00:24:52.946 "num_blocks": 253952, 00:24:52.946 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:24:52.946 "assigned_rate_limits": { 00:24:52.946 "rw_ios_per_sec": 0, 00:24:52.946 "rw_mbytes_per_sec": 0, 00:24:52.946 "r_mbytes_per_sec": 0, 00:24:52.946 "w_mbytes_per_sec": 0 00:24:52.946 }, 00:24:52.946 "claimed": false, 00:24:52.946 "zoned": false, 00:24:52.946 "supported_io_types": { 00:24:52.946 "read": true, 00:24:52.946 "write": true, 00:24:52.946 "unmap": true, 00:24:52.946 "flush": true, 00:24:52.946 "reset": true, 00:24:52.946 "nvme_admin": false, 00:24:52.946 "nvme_io": false, 00:24:52.946 "nvme_io_md": false, 00:24:52.946 "write_zeroes": true, 00:24:52.946 "zcopy": false, 00:24:52.946 "get_zone_info": false, 00:24:52.946 "zone_management": false, 00:24:52.946 "zone_append": false, 00:24:52.946 "compare": false, 00:24:52.946 "compare_and_write": false, 00:24:52.946 "abort": false, 00:24:52.946 "seek_hole": false, 00:24:52.946 "seek_data": false, 00:24:52.946 "copy": false, 00:24:52.946 "nvme_iov_md": false 00:24:52.946 }, 00:24:52.946 "memory_domains": [ 00:24:52.946 { 00:24:52.946 "dma_device_id": "system", 00:24:52.946 "dma_device_type": 1 00:24:52.946 }, 00:24:52.946 { 00:24:52.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.946 "dma_device_type": 2 00:24:52.946 }, 00:24:52.946 { 00:24:52.946 "dma_device_id": "system", 00:24:52.946 "dma_device_type": 1 00:24:52.946 }, 00:24:52.946 { 00:24:52.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.946 "dma_device_type": 2 00:24:52.946 }, 00:24:52.946 { 00:24:52.946 "dma_device_id": "system", 00:24:52.946 "dma_device_type": 1 00:24:52.946 }, 00:24:52.946 { 00:24:52.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.946 "dma_device_type": 2 00:24:52.946 }, 00:24:52.946 { 00:24:52.946 "dma_device_id": "system", 00:24:52.946 "dma_device_type": 1 00:24:52.946 }, 00:24:52.946 { 00:24:52.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:52.946 "dma_device_type": 2 00:24:52.946 } 00:24:52.946 ], 00:24:52.946 "driver_specific": { 00:24:52.946 "raid": { 00:24:52.946 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:24:52.946 "strip_size_kb": 64, 00:24:52.946 "state": "online", 00:24:52.946 "raid_level": "concat", 00:24:52.946 "superblock": true, 00:24:52.946 "num_base_bdevs": 4, 00:24:52.946 "num_base_bdevs_discovered": 4, 00:24:52.946 "num_base_bdevs_operational": 4, 00:24:52.946 "base_bdevs_list": [ 00:24:52.946 { 00:24:52.946 "name": "pt1", 00:24:52.946 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:24:52.947 "is_configured": true, 00:24:52.947 "data_offset": 2048, 00:24:52.947 "data_size": 63488 00:24:52.947 }, 00:24:52.947 { 00:24:52.947 "name": "pt2", 00:24:52.947 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:52.947 "is_configured": true, 00:24:52.947 "data_offset": 2048, 00:24:52.947 "data_size": 63488 00:24:52.947 }, 00:24:52.947 { 00:24:52.947 "name": "pt3", 00:24:52.947 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:52.947 "is_configured": true, 00:24:52.947 "data_offset": 2048, 00:24:52.947 "data_size": 63488 00:24:52.947 }, 00:24:52.947 { 00:24:52.947 "name": "pt4", 00:24:52.947 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:52.947 "is_configured": true, 00:24:52.947 "data_offset": 2048, 00:24:52.947 "data_size": 63488 00:24:52.947 } 00:24:52.947 ] 00:24:52.947 } 00:24:52.947 } 00:24:52.947 }' 00:24:52.947 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:52.947 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:52.947 pt2 00:24:52.947 pt3 00:24:52.947 pt4' 00:24:52.947 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:52.947 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:52.947 14:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:53.206 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:53.206 "name": "pt1", 00:24:53.206 "aliases": [ 00:24:53.206 "00000000-0000-0000-0000-000000000001" 00:24:53.206 ], 00:24:53.206 "product_name": "passthru", 00:24:53.206 "block_size": 512, 00:24:53.206 "num_blocks": 65536, 00:24:53.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:53.206 "assigned_rate_limits": { 00:24:53.206 "rw_ios_per_sec": 0, 00:24:53.206 "rw_mbytes_per_sec": 0, 00:24:53.206 "r_mbytes_per_sec": 0, 00:24:53.206 "w_mbytes_per_sec": 0 00:24:53.206 }, 00:24:53.206 "claimed": true, 00:24:53.206 "claim_type": "exclusive_write", 00:24:53.206 "zoned": false, 00:24:53.206 "supported_io_types": { 00:24:53.206 "read": true, 00:24:53.206 "write": true, 00:24:53.206 "unmap": true, 00:24:53.206 "flush": true, 00:24:53.206 "reset": true, 00:24:53.206 "nvme_admin": false, 00:24:53.206 "nvme_io": false, 00:24:53.206 "nvme_io_md": false, 00:24:53.206 "write_zeroes": true, 00:24:53.206 "zcopy": true, 00:24:53.206 "get_zone_info": false, 00:24:53.206 "zone_management": false, 00:24:53.206 "zone_append": false, 00:24:53.206 "compare": false, 00:24:53.206 "compare_and_write": false, 00:24:53.206 "abort": true, 00:24:53.206 "seek_hole": false, 00:24:53.206 "seek_data": false, 00:24:53.206 "copy": true, 00:24:53.206 "nvme_iov_md": false 00:24:53.206 }, 00:24:53.206 "memory_domains": [ 00:24:53.206 { 00:24:53.206 "dma_device_id": "system", 00:24:53.206 "dma_device_type": 1 00:24:53.206 }, 00:24:53.206 { 00:24:53.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:53.206 "dma_device_type": 2 00:24:53.206 } 00:24:53.206 ], 00:24:53.206 "driver_specific": { 00:24:53.206 "passthru": { 00:24:53.206 "name": "pt1", 00:24:53.206 "base_bdev_name": "malloc1" 00:24:53.206 } 00:24:53.206 } 00:24:53.206 }' 00:24:53.206 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:53.465 14:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:53.465 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:53.465 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.465 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:53.465 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:53.465 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.465 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:53.466 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:53.466 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.747 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:53.747 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:53.747 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:53.747 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:53.747 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:54.005 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:54.005 "name": "pt2", 00:24:54.005 "aliases": [ 00:24:54.005 "00000000-0000-0000-0000-000000000002" 00:24:54.005 ], 00:24:54.005 "product_name": "passthru", 00:24:54.005 "block_size": 512, 00:24:54.005 "num_blocks": 65536, 00:24:54.005 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:54.006 "assigned_rate_limits": { 00:24:54.006 "rw_ios_per_sec": 0, 00:24:54.006 "rw_mbytes_per_sec": 0, 00:24:54.006 "r_mbytes_per_sec": 0, 00:24:54.006 "w_mbytes_per_sec": 0 00:24:54.006 }, 00:24:54.006 "claimed": true, 00:24:54.006 "claim_type": "exclusive_write", 00:24:54.006 "zoned": false, 00:24:54.006 "supported_io_types": { 00:24:54.006 "read": true, 00:24:54.006 "write": true, 00:24:54.006 "unmap": true, 00:24:54.006 "flush": true, 00:24:54.006 "reset": true, 00:24:54.006 "nvme_admin": false, 00:24:54.006 "nvme_io": false, 00:24:54.006 "nvme_io_md": false, 00:24:54.006 "write_zeroes": true, 00:24:54.006 "zcopy": true, 00:24:54.006 "get_zone_info": false, 00:24:54.006 "zone_management": false, 00:24:54.006 "zone_append": false, 00:24:54.006 "compare": false, 00:24:54.006 "compare_and_write": false, 00:24:54.006 "abort": true, 00:24:54.006 "seek_hole": false, 00:24:54.006 "seek_data": false, 00:24:54.006 "copy": true, 00:24:54.006 "nvme_iov_md": false 00:24:54.006 }, 00:24:54.006 "memory_domains": [ 00:24:54.006 { 00:24:54.006 "dma_device_id": "system", 00:24:54.006 "dma_device_type": 1 00:24:54.006 }, 00:24:54.006 { 00:24:54.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.006 "dma_device_type": 2 00:24:54.006 } 00:24:54.006 ], 00:24:54.006 "driver_specific": { 00:24:54.006 "passthru": { 00:24:54.006 "name": "pt2", 00:24:54.006 "base_bdev_name": "malloc2" 00:24:54.006 } 00:24:54.006 } 00:24:54.006 }' 00:24:54.006 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.006 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.006 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 
-- # [[ 512 == 512 ]] 00:24:54.006 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.006 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.006 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:54.006 14:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.263 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.264 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:54.264 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.264 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.264 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:54.264 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:54.264 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:54.264 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:54.522 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:54.522 "name": "pt3", 00:24:54.522 "aliases": [ 00:24:54.522 "00000000-0000-0000-0000-000000000003" 00:24:54.522 ], 00:24:54.522 "product_name": "passthru", 00:24:54.522 "block_size": 512, 00:24:54.522 "num_blocks": 65536, 00:24:54.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:54.522 "assigned_rate_limits": { 00:24:54.522 "rw_ios_per_sec": 0, 00:24:54.522 "rw_mbytes_per_sec": 0, 00:24:54.522 "r_mbytes_per_sec": 0, 00:24:54.522 "w_mbytes_per_sec": 0 00:24:54.522 }, 00:24:54.522 "claimed": true, 00:24:54.522 "claim_type": "exclusive_write", 00:24:54.522 "zoned": false, 00:24:54.522 "supported_io_types": { 00:24:54.522 "read": true, 00:24:54.522 "write": true, 00:24:54.522 "unmap": true, 00:24:54.522 "flush": true, 00:24:54.522 "reset": true, 00:24:54.522 "nvme_admin": false, 00:24:54.522 "nvme_io": false, 00:24:54.522 "nvme_io_md": false, 00:24:54.522 "write_zeroes": true, 00:24:54.522 "zcopy": true, 00:24:54.522 "get_zone_info": false, 00:24:54.522 "zone_management": false, 00:24:54.522 "zone_append": false, 00:24:54.522 "compare": false, 00:24:54.522 "compare_and_write": false, 00:24:54.522 "abort": true, 00:24:54.522 "seek_hole": false, 00:24:54.522 "seek_data": false, 00:24:54.522 "copy": true, 00:24:54.522 "nvme_iov_md": false 00:24:54.522 }, 00:24:54.522 "memory_domains": [ 00:24:54.522 { 00:24:54.522 "dma_device_id": "system", 00:24:54.522 "dma_device_type": 1 00:24:54.522 }, 00:24:54.522 { 00:24:54.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:54.522 "dma_device_type": 2 00:24:54.522 } 00:24:54.522 ], 00:24:54.522 "driver_specific": { 00:24:54.522 "passthru": { 00:24:54.522 "name": "pt3", 00:24:54.522 "base_bdev_name": "malloc3" 00:24:54.522 } 00:24:54.522 } 00:24:54.522 }' 00:24:54.522 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.522 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:54.522 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:54.522 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.780 14:17:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:54.780 14:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:55.038 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:55.038 "name": "pt4", 00:24:55.038 "aliases": [ 00:24:55.038 "00000000-0000-0000-0000-000000000004" 00:24:55.038 ], 00:24:55.039 "product_name": "passthru", 00:24:55.039 "block_size": 512, 00:24:55.039 "num_blocks": 65536, 00:24:55.039 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:55.039 "assigned_rate_limits": { 00:24:55.039 "rw_ios_per_sec": 0, 00:24:55.039 "rw_mbytes_per_sec": 0, 00:24:55.039 "r_mbytes_per_sec": 0, 00:24:55.039 "w_mbytes_per_sec": 0 00:24:55.039 }, 00:24:55.039 "claimed": true, 00:24:55.039 "claim_type": "exclusive_write", 00:24:55.039 "zoned": false, 00:24:55.039 "supported_io_types": { 00:24:55.039 "read": true, 00:24:55.039 "write": true, 00:24:55.039 "unmap": true, 00:24:55.039 "flush": true, 00:24:55.039 "reset": true, 00:24:55.039 "nvme_admin": false, 00:24:55.039 "nvme_io": false, 00:24:55.039 "nvme_io_md": false, 00:24:55.039 "write_zeroes": true, 00:24:55.039 "zcopy": true, 00:24:55.039 "get_zone_info": false, 00:24:55.039 "zone_management": false, 00:24:55.039 "zone_append": false, 00:24:55.039 "compare": false, 00:24:55.039 "compare_and_write": false, 00:24:55.039 "abort": true, 00:24:55.039 "seek_hole": false, 00:24:55.039 "seek_data": false, 00:24:55.039 "copy": true, 00:24:55.039 "nvme_iov_md": false 00:24:55.039 }, 00:24:55.039 "memory_domains": [ 00:24:55.039 { 00:24:55.039 "dma_device_id": "system", 00:24:55.039 "dma_device_type": 1 00:24:55.039 }, 00:24:55.039 { 00:24:55.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.039 "dma_device_type": 2 00:24:55.039 } 00:24:55.039 ], 00:24:55.039 "driver_specific": { 00:24:55.039 "passthru": { 00:24:55.039 "name": "pt4", 00:24:55.039 "base_bdev_name": "malloc4" 00:24:55.039 } 00:24:55.039 } 00:24:55.039 }' 00:24:55.039 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:55.297 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:55.297 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:55.297 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:55.297 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:55.297 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- 
# [[ null == null ]] 00:24:55.297 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:55.297 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:55.556 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:55.556 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.556 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:55.556 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:55.556 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:55.556 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:24:55.815 [2024-07-15 14:17:41.664279] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:55.815 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6032b163-cba4-4655-a96c-f22b0505b714 00:24:55.815 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6032b163-cba4-4655-a96c-f22b0505b714 ']' 00:24:55.815 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:56.072 [2024-07-15 14:17:41.908056] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:56.072 [2024-07-15 14:17:41.908101] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:56.072 [2024-07-15 14:17:41.908173] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:56.072 [2024-07-15 14:17:41.908230] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:56.072 [2024-07-15 14:17:41.908258] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:56.072 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.072 14:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:24:56.330 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:24:56.330 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:24:56.330 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:56.330 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:56.588 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:56.588 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:56.846 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:56.846 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:57.104 14:17:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:57.104 14:17:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:57.362 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:57.362 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:57.620 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:57.878 [2024-07-15 14:17:43.672300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:57.878 [2024-07-15 14:17:43.673896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:57.878 [2024-07-15 14:17:43.673970] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:57.878 [2024-07-15 14:17:43.674002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:57.878 [2024-07-15 14:17:43.674041] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:57.878 [2024-07-15 14:17:43.674148] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:57.878 [2024-07-15 14:17:43.674188] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc3 00:24:57.878 [2024-07-15 14:17:43.674224] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:57.878 [2024-07-15 14:17:43.674251] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:57.878 [2024-07-15 14:17:43.674262] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:24:57.878 request: 00:24:57.878 { 00:24:57.878 "name": "raid_bdev1", 00:24:57.878 "raid_level": "concat", 00:24:57.878 "base_bdevs": [ 00:24:57.878 "malloc1", 00:24:57.878 "malloc2", 00:24:57.878 "malloc3", 00:24:57.878 "malloc4" 00:24:57.878 ], 00:24:57.878 "strip_size_kb": 64, 00:24:57.878 "superblock": false, 00:24:57.878 "method": "bdev_raid_create", 00:24:57.878 "req_id": 1 00:24:57.878 } 00:24:57.878 Got JSON-RPC error response 00:24:57.878 response: 00:24:57.878 { 00:24:57.878 "code": -17, 00:24:57.878 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:57.878 } 00:24:57.878 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:57.878 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:57.878 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:57.878 14:17:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:57.878 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:57.878 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.136 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:58.136 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:58.136 14:17:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:58.394 [2024-07-15 14:17:44.220336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:58.394 [2024-07-15 14:17:44.220702] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.394 [2024-07-15 14:17:44.220807] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:58.394 [2024-07-15 14:17:44.221107] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.394 [2024-07-15 14:17:44.222949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.394 [2024-07-15 14:17:44.223125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:58.394 [2024-07-15 14:17:44.223340] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:58.394 [2024-07-15 14:17:44.223508] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:58.394 pt1 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:58.394 14:17:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.394 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.652 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:58.652 "name": "raid_bdev1", 00:24:58.652 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:24:58.652 "strip_size_kb": 64, 00:24:58.652 "state": "configuring", 00:24:58.652 "raid_level": "concat", 00:24:58.652 "superblock": true, 00:24:58.652 "num_base_bdevs": 4, 00:24:58.652 "num_base_bdevs_discovered": 1, 00:24:58.652 "num_base_bdevs_operational": 4, 00:24:58.652 "base_bdevs_list": [ 00:24:58.652 { 00:24:58.652 "name": "pt1", 00:24:58.652 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:58.652 "is_configured": true, 00:24:58.652 "data_offset": 2048, 00:24:58.652 "data_size": 63488 00:24:58.652 }, 00:24:58.652 { 00:24:58.652 "name": null, 00:24:58.652 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:58.652 "is_configured": false, 00:24:58.652 "data_offset": 2048, 00:24:58.652 "data_size": 63488 00:24:58.652 }, 00:24:58.652 { 00:24:58.652 "name": null, 00:24:58.652 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:58.652 "is_configured": false, 00:24:58.652 "data_offset": 2048, 00:24:58.652 "data_size": 63488 00:24:58.652 }, 00:24:58.652 { 00:24:58.652 "name": null, 00:24:58.652 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:58.652 "is_configured": false, 00:24:58.652 "data_offset": 2048, 00:24:58.652 "data_size": 63488 00:24:58.652 } 00:24:58.652 ] 00:24:58.652 }' 00:24:58.652 14:17:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:58.652 14:17:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.217 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:59.217 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:59.475 [2024-07-15 14:17:45.408496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:59.475 [2024-07-15 14:17:45.408840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:59.475 [2024-07-15 14:17:45.409044] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:59.475 [2024-07-15 14:17:45.409222] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:59.475 [2024-07-15 14:17:45.409711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:24:59.475 [2024-07-15 14:17:45.409871] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:59.475 [2024-07-15 14:17:45.410080] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:59.475 [2024-07-15 14:17:45.410210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:59.475 pt2 00:24:59.475 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:59.733 [2024-07-15 14:17:45.704559] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:59.733 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:59.991 14:17:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:59.991 "name": "raid_bdev1", 00:24:59.991 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:24:59.991 "strip_size_kb": 64, 00:24:59.991 "state": "configuring", 00:24:59.991 "raid_level": "concat", 00:24:59.991 "superblock": true, 00:24:59.991 "num_base_bdevs": 4, 00:24:59.991 "num_base_bdevs_discovered": 1, 00:24:59.991 "num_base_bdevs_operational": 4, 00:24:59.991 "base_bdevs_list": [ 00:24:59.991 { 00:24:59.991 "name": "pt1", 00:24:59.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:24:59.991 "is_configured": true, 00:24:59.991 "data_offset": 2048, 00:24:59.991 "data_size": 63488 00:24:59.991 }, 00:24:59.991 { 00:24:59.991 "name": null, 00:24:59.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:24:59.991 "is_configured": false, 00:24:59.991 "data_offset": 2048, 00:24:59.991 "data_size": 63488 00:24:59.991 }, 00:24:59.991 { 00:24:59.991 "name": null, 00:24:59.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:24:59.991 "is_configured": false, 00:24:59.991 "data_offset": 2048, 00:24:59.991 "data_size": 63488 00:24:59.991 }, 00:24:59.991 { 00:24:59.991 "name": null, 00:24:59.991 "uuid": "00000000-0000-0000-0000-000000000004", 00:24:59.991 "is_configured": false, 00:24:59.991 "data_offset": 2048, 00:24:59.991 "data_size": 63488 00:24:59.991 } 00:24:59.991 ] 00:24:59.991 }' 00:24:59.991 14:17:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:59.991 14:17:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:00.923 14:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:00.923 14:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:00.923 14:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:00.923 [2024-07-15 14:17:46.852644] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:00.923 [2024-07-15 14:17:46.853226] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:00.923 [2024-07-15 14:17:46.853464] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:00.923 [2024-07-15 14:17:46.853699] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:00.923 [2024-07-15 14:17:46.854269] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:00.923 [2024-07-15 14:17:46.854488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:00.923 [2024-07-15 14:17:46.854775] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:00.923 [2024-07-15 14:17:46.854914] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:00.923 pt2 00:25:00.923 14:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:00.923 14:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:00.923 14:17:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:01.181 [2024-07-15 14:17:47.088704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:01.181 [2024-07-15 14:17:47.089318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.181 [2024-07-15 14:17:47.089536] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:01.181 [2024-07-15 14:17:47.089796] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.181 [2024-07-15 14:17:47.090346] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.181 [2024-07-15 14:17:47.090572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:01.181 [2024-07-15 14:17:47.090880] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:01.181 [2024-07-15 14:17:47.091019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:01.181 pt3 00:25:01.182 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:01.182 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:01.182 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:01.439 [2024-07-15 14:17:47.332722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc4 00:25:01.439 [2024-07-15 14:17:47.333118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:01.439 [2024-07-15 14:17:47.333349] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:01.439 [2024-07-15 14:17:47.333576] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:01.439 [2024-07-15 14:17:47.334149] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:01.439 [2024-07-15 14:17:47.334367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:01.439 [2024-07-15 14:17:47.334636] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:01.439 [2024-07-15 14:17:47.334795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:01.439 [2024-07-15 14:17:47.335030] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:01.439 [2024-07-15 14:17:47.335149] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:01.439 [2024-07-15 14:17:47.335268] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:25:01.439 [2024-07-15 14:17:47.335567] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:01.439 [2024-07-15 14:17:47.335615] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:01.439 [2024-07-15 14:17:47.335882] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:01.439 pt4 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:01.439 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:01.697 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.697 "name": "raid_bdev1", 00:25:01.697 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:25:01.697 "strip_size_kb": 64, 00:25:01.697 "state": "online", 00:25:01.697 
"raid_level": "concat", 00:25:01.697 "superblock": true, 00:25:01.697 "num_base_bdevs": 4, 00:25:01.697 "num_base_bdevs_discovered": 4, 00:25:01.697 "num_base_bdevs_operational": 4, 00:25:01.697 "base_bdevs_list": [ 00:25:01.697 { 00:25:01.697 "name": "pt1", 00:25:01.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:01.697 "is_configured": true, 00:25:01.697 "data_offset": 2048, 00:25:01.697 "data_size": 63488 00:25:01.697 }, 00:25:01.697 { 00:25:01.697 "name": "pt2", 00:25:01.698 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:01.698 "is_configured": true, 00:25:01.698 "data_offset": 2048, 00:25:01.698 "data_size": 63488 00:25:01.698 }, 00:25:01.698 { 00:25:01.698 "name": "pt3", 00:25:01.698 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:01.698 "is_configured": true, 00:25:01.698 "data_offset": 2048, 00:25:01.698 "data_size": 63488 00:25:01.698 }, 00:25:01.698 { 00:25:01.698 "name": "pt4", 00:25:01.698 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:01.698 "is_configured": true, 00:25:01.698 "data_offset": 2048, 00:25:01.698 "data_size": 63488 00:25:01.698 } 00:25:01.698 ] 00:25:01.698 }' 00:25:01.698 14:17:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.698 14:17:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:02.634 [2024-07-15 14:17:48.553228] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:02.634 "name": "raid_bdev1", 00:25:02.634 "aliases": [ 00:25:02.634 "6032b163-cba4-4655-a96c-f22b0505b714" 00:25:02.634 ], 00:25:02.634 "product_name": "Raid Volume", 00:25:02.634 "block_size": 512, 00:25:02.634 "num_blocks": 253952, 00:25:02.634 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:25:02.634 "assigned_rate_limits": { 00:25:02.634 "rw_ios_per_sec": 0, 00:25:02.634 "rw_mbytes_per_sec": 0, 00:25:02.634 "r_mbytes_per_sec": 0, 00:25:02.634 "w_mbytes_per_sec": 0 00:25:02.634 }, 00:25:02.634 "claimed": false, 00:25:02.634 "zoned": false, 00:25:02.634 "supported_io_types": { 00:25:02.634 "read": true, 00:25:02.634 "write": true, 00:25:02.634 "unmap": true, 00:25:02.634 "flush": true, 00:25:02.634 "reset": true, 00:25:02.634 "nvme_admin": false, 00:25:02.634 "nvme_io": false, 00:25:02.634 "nvme_io_md": false, 00:25:02.634 "write_zeroes": true, 00:25:02.634 "zcopy": false, 00:25:02.634 "get_zone_info": false, 00:25:02.634 "zone_management": false, 00:25:02.634 "zone_append": false, 00:25:02.634 "compare": false, 00:25:02.634 "compare_and_write": false, 
00:25:02.634 "abort": false, 00:25:02.634 "seek_hole": false, 00:25:02.634 "seek_data": false, 00:25:02.634 "copy": false, 00:25:02.634 "nvme_iov_md": false 00:25:02.634 }, 00:25:02.634 "memory_domains": [ 00:25:02.634 { 00:25:02.634 "dma_device_id": "system", 00:25:02.634 "dma_device_type": 1 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.634 "dma_device_type": 2 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "dma_device_id": "system", 00:25:02.634 "dma_device_type": 1 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.634 "dma_device_type": 2 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "dma_device_id": "system", 00:25:02.634 "dma_device_type": 1 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.634 "dma_device_type": 2 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "dma_device_id": "system", 00:25:02.634 "dma_device_type": 1 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:02.634 "dma_device_type": 2 00:25:02.634 } 00:25:02.634 ], 00:25:02.634 "driver_specific": { 00:25:02.634 "raid": { 00:25:02.634 "uuid": "6032b163-cba4-4655-a96c-f22b0505b714", 00:25:02.634 "strip_size_kb": 64, 00:25:02.634 "state": "online", 00:25:02.634 "raid_level": "concat", 00:25:02.634 "superblock": true, 00:25:02.634 "num_base_bdevs": 4, 00:25:02.634 "num_base_bdevs_discovered": 4, 00:25:02.634 "num_base_bdevs_operational": 4, 00:25:02.634 "base_bdevs_list": [ 00:25:02.634 { 00:25:02.634 "name": "pt1", 00:25:02.634 "uuid": "00000000-0000-0000-0000-000000000001", 00:25:02.634 "is_configured": true, 00:25:02.634 "data_offset": 2048, 00:25:02.634 "data_size": 63488 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "name": "pt2", 00:25:02.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:02.634 "is_configured": true, 00:25:02.634 "data_offset": 2048, 00:25:02.634 "data_size": 63488 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "name": "pt3", 00:25:02.634 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:02.634 "is_configured": true, 00:25:02.634 "data_offset": 2048, 00:25:02.634 "data_size": 63488 00:25:02.634 }, 00:25:02.634 { 00:25:02.634 "name": "pt4", 00:25:02.634 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:02.634 "is_configured": true, 00:25:02.634 "data_offset": 2048, 00:25:02.634 "data_size": 63488 00:25:02.634 } 00:25:02.634 ] 00:25:02.634 } 00:25:02.634 } 00:25:02.634 }' 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:02.634 pt2 00:25:02.634 pt3 00:25:02.634 pt4' 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:02.634 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:03.200 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:03.200 "name": "pt1", 00:25:03.200 "aliases": [ 00:25:03.200 "00000000-0000-0000-0000-000000000001" 00:25:03.200 ], 00:25:03.200 "product_name": "passthru", 00:25:03.200 "block_size": 512, 00:25:03.200 "num_blocks": 65536, 00:25:03.200 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:25:03.200 "assigned_rate_limits": { 00:25:03.200 "rw_ios_per_sec": 0, 00:25:03.200 "rw_mbytes_per_sec": 0, 00:25:03.200 "r_mbytes_per_sec": 0, 00:25:03.200 "w_mbytes_per_sec": 0 00:25:03.200 }, 00:25:03.200 "claimed": true, 00:25:03.200 "claim_type": "exclusive_write", 00:25:03.200 "zoned": false, 00:25:03.200 "supported_io_types": { 00:25:03.200 "read": true, 00:25:03.200 "write": true, 00:25:03.200 "unmap": true, 00:25:03.200 "flush": true, 00:25:03.200 "reset": true, 00:25:03.200 "nvme_admin": false, 00:25:03.200 "nvme_io": false, 00:25:03.200 "nvme_io_md": false, 00:25:03.200 "write_zeroes": true, 00:25:03.200 "zcopy": true, 00:25:03.200 "get_zone_info": false, 00:25:03.200 "zone_management": false, 00:25:03.200 "zone_append": false, 00:25:03.200 "compare": false, 00:25:03.200 "compare_and_write": false, 00:25:03.200 "abort": true, 00:25:03.200 "seek_hole": false, 00:25:03.200 "seek_data": false, 00:25:03.200 "copy": true, 00:25:03.200 "nvme_iov_md": false 00:25:03.200 }, 00:25:03.200 "memory_domains": [ 00:25:03.200 { 00:25:03.200 "dma_device_id": "system", 00:25:03.200 "dma_device_type": 1 00:25:03.200 }, 00:25:03.200 { 00:25:03.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.200 "dma_device_type": 2 00:25:03.200 } 00:25:03.200 ], 00:25:03.200 "driver_specific": { 00:25:03.200 "passthru": { 00:25:03.200 "name": "pt1", 00:25:03.200 "base_bdev_name": "malloc1" 00:25:03.200 } 00:25:03.200 } 00:25:03.200 }' 00:25:03.200 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.200 14:17:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.200 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.459 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.459 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:03.459 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:03.459 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:03.459 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:03.715 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:03.715 "name": "pt2", 00:25:03.715 "aliases": [ 00:25:03.715 "00000000-0000-0000-0000-000000000002" 00:25:03.715 ], 00:25:03.715 "product_name": "passthru", 00:25:03.715 "block_size": 512, 00:25:03.715 "num_blocks": 65536, 00:25:03.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:25:03.716 "assigned_rate_limits": { 00:25:03.716 "rw_ios_per_sec": 0, 00:25:03.716 "rw_mbytes_per_sec": 0, 
00:25:03.716 "r_mbytes_per_sec": 0, 00:25:03.716 "w_mbytes_per_sec": 0 00:25:03.716 }, 00:25:03.716 "claimed": true, 00:25:03.716 "claim_type": "exclusive_write", 00:25:03.716 "zoned": false, 00:25:03.716 "supported_io_types": { 00:25:03.716 "read": true, 00:25:03.716 "write": true, 00:25:03.716 "unmap": true, 00:25:03.716 "flush": true, 00:25:03.716 "reset": true, 00:25:03.716 "nvme_admin": false, 00:25:03.716 "nvme_io": false, 00:25:03.716 "nvme_io_md": false, 00:25:03.716 "write_zeroes": true, 00:25:03.716 "zcopy": true, 00:25:03.716 "get_zone_info": false, 00:25:03.716 "zone_management": false, 00:25:03.716 "zone_append": false, 00:25:03.716 "compare": false, 00:25:03.716 "compare_and_write": false, 00:25:03.716 "abort": true, 00:25:03.716 "seek_hole": false, 00:25:03.716 "seek_data": false, 00:25:03.716 "copy": true, 00:25:03.716 "nvme_iov_md": false 00:25:03.716 }, 00:25:03.716 "memory_domains": [ 00:25:03.716 { 00:25:03.716 "dma_device_id": "system", 00:25:03.716 "dma_device_type": 1 00:25:03.716 }, 00:25:03.716 { 00:25:03.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.716 "dma_device_type": 2 00:25:03.716 } 00:25:03.716 ], 00:25:03.716 "driver_specific": { 00:25:03.716 "passthru": { 00:25:03.716 "name": "pt2", 00:25:03.716 "base_bdev_name": "malloc2" 00:25:03.716 } 00:25:03.716 } 00:25:03.716 }' 00:25:03.716 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.716 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:03.716 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:03.716 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.716 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:03.973 14:17:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:04.231 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:04.231 "name": "pt3", 00:25:04.231 "aliases": [ 00:25:04.231 "00000000-0000-0000-0000-000000000003" 00:25:04.231 ], 00:25:04.231 "product_name": "passthru", 00:25:04.231 "block_size": 512, 00:25:04.231 "num_blocks": 65536, 00:25:04.231 "uuid": "00000000-0000-0000-0000-000000000003", 00:25:04.231 "assigned_rate_limits": { 00:25:04.231 "rw_ios_per_sec": 0, 00:25:04.231 "rw_mbytes_per_sec": 0, 00:25:04.231 "r_mbytes_per_sec": 0, 00:25:04.231 "w_mbytes_per_sec": 0 00:25:04.231 }, 00:25:04.231 "claimed": true, 00:25:04.231 "claim_type": 
"exclusive_write", 00:25:04.231 "zoned": false, 00:25:04.231 "supported_io_types": { 00:25:04.231 "read": true, 00:25:04.231 "write": true, 00:25:04.231 "unmap": true, 00:25:04.231 "flush": true, 00:25:04.231 "reset": true, 00:25:04.231 "nvme_admin": false, 00:25:04.231 "nvme_io": false, 00:25:04.231 "nvme_io_md": false, 00:25:04.231 "write_zeroes": true, 00:25:04.231 "zcopy": true, 00:25:04.231 "get_zone_info": false, 00:25:04.231 "zone_management": false, 00:25:04.231 "zone_append": false, 00:25:04.231 "compare": false, 00:25:04.231 "compare_and_write": false, 00:25:04.231 "abort": true, 00:25:04.231 "seek_hole": false, 00:25:04.231 "seek_data": false, 00:25:04.231 "copy": true, 00:25:04.231 "nvme_iov_md": false 00:25:04.231 }, 00:25:04.231 "memory_domains": [ 00:25:04.231 { 00:25:04.231 "dma_device_id": "system", 00:25:04.231 "dma_device_type": 1 00:25:04.231 }, 00:25:04.231 { 00:25:04.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:04.231 "dma_device_type": 2 00:25:04.231 } 00:25:04.231 ], 00:25:04.231 "driver_specific": { 00:25:04.231 "passthru": { 00:25:04.231 "name": "pt3", 00:25:04.231 "base_bdev_name": "malloc3" 00:25:04.231 } 00:25:04.231 } 00:25:04.231 }' 00:25:04.231 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:04.489 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.746 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:04.746 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:04.746 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:04.746 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:04.746 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:05.004 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:05.004 "name": "pt4", 00:25:05.004 "aliases": [ 00:25:05.004 "00000000-0000-0000-0000-000000000004" 00:25:05.004 ], 00:25:05.004 "product_name": "passthru", 00:25:05.004 "block_size": 512, 00:25:05.004 "num_blocks": 65536, 00:25:05.004 "uuid": "00000000-0000-0000-0000-000000000004", 00:25:05.004 "assigned_rate_limits": { 00:25:05.004 "rw_ios_per_sec": 0, 00:25:05.004 "rw_mbytes_per_sec": 0, 00:25:05.004 "r_mbytes_per_sec": 0, 00:25:05.004 "w_mbytes_per_sec": 0 00:25:05.004 }, 00:25:05.004 "claimed": true, 00:25:05.004 "claim_type": "exclusive_write", 00:25:05.004 "zoned": false, 00:25:05.004 "supported_io_types": { 00:25:05.004 "read": true, 00:25:05.004 "write": true, 00:25:05.004 
"unmap": true, 00:25:05.004 "flush": true, 00:25:05.004 "reset": true, 00:25:05.004 "nvme_admin": false, 00:25:05.004 "nvme_io": false, 00:25:05.004 "nvme_io_md": false, 00:25:05.004 "write_zeroes": true, 00:25:05.004 "zcopy": true, 00:25:05.004 "get_zone_info": false, 00:25:05.004 "zone_management": false, 00:25:05.004 "zone_append": false, 00:25:05.004 "compare": false, 00:25:05.004 "compare_and_write": false, 00:25:05.004 "abort": true, 00:25:05.004 "seek_hole": false, 00:25:05.004 "seek_data": false, 00:25:05.004 "copy": true, 00:25:05.004 "nvme_iov_md": false 00:25:05.004 }, 00:25:05.004 "memory_domains": [ 00:25:05.004 { 00:25:05.004 "dma_device_id": "system", 00:25:05.004 "dma_device_type": 1 00:25:05.004 }, 00:25:05.004 { 00:25:05.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:05.004 "dma_device_type": 2 00:25:05.004 } 00:25:05.004 ], 00:25:05.004 "driver_specific": { 00:25:05.004 "passthru": { 00:25:05.004 "name": "pt4", 00:25:05.004 "base_bdev_name": "malloc4" 00:25:05.004 } 00:25:05.004 } 00:25:05.004 }' 00:25:05.004 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.004 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:05.004 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:05.004 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.004 14:17:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:05.262 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:05.519 [2024-07-15 14:17:51.457606] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6032b163-cba4-4655-a96c-f22b0505b714 '!=' 6032b163-cba4-4655-a96c-f22b0505b714 ']' 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 205842 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 205842 ']' 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 205842 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:25:05.519 14:17:51 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 205842 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 205842' 00:25:05.519 killing process with pid 205842 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 205842 00:25:05.519 [2024-07-15 14:17:51.520152] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:05.519 [2024-07-15 14:17:51.520232] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:05.519 [2024-07-15 14:17:51.520282] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:05.519 [2024-07-15 14:17:51.520293] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:05.519 14:17:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 205842 00:25:06.085 [2024-07-15 14:17:51.866890] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:07.021 14:17:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:25:07.021 00:25:07.021 real 0m18.900s 00:25:07.021 user 0m33.951s 00:25:07.021 sys 0m2.184s 00:25:07.021 14:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:07.021 14:17:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.021 ************************************ 00:25:07.280 END TEST raid_superblock_test 00:25:07.280 ************************************ 00:25:07.280 14:17:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:07.280 14:17:53 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:25:07.280 14:17:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:07.280 14:17:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:07.280 14:17:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:07.280 ************************************ 00:25:07.280 START TEST raid_read_error_test 00:25:07.280 ************************************ 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:07.280 
14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dAJ57TERUG 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=206406 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 206406 /var/tmp/spdk-raid.sock 00:25:07.280 14:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 206406 ']' 00:25:07.281 14:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:07.281 14:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:07.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:07.281 14:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:25:07.281 14:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:07.281 14:17:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:07.281 [2024-07-15 14:17:53.147119] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:25:07.281 [2024-07-15 14:17:53.147378] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206406 ] 00:25:07.608 [2024-07-15 14:17:53.318675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.608 [2024-07-15 14:17:53.578484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:07.881 [2024-07-15 14:17:53.785611] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:08.447 14:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:08.447 14:17:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:08.447 14:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:08.447 14:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:08.447 BaseBdev1_malloc 00:25:08.447 14:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:08.706 true 00:25:08.706 14:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:08.965 [2024-07-15 14:17:54.954065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:08.965 [2024-07-15 14:17:54.954191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:08.965 [2024-07-15 14:17:54.954240] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:08.965 [2024-07-15 14:17:54.954270] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:08.965 [2024-07-15 14:17:54.956091] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:08.965 [2024-07-15 14:17:54.956161] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:08.965 BaseBdev1 00:25:09.223 14:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:09.223 14:17:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:09.482 BaseBdev2_malloc 00:25:09.482 14:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:09.740 true 00:25:09.740 14:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:09.998 [2024-07-15 14:17:55.813627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on EE_BaseBdev2_malloc 00:25:09.998 [2024-07-15 14:17:55.813939] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:09.998 [2024-07-15 14:17:55.814053] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:09.998 [2024-07-15 14:17:55.814132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:09.998 [2024-07-15 14:17:55.815966] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:09.998 [2024-07-15 14:17:55.816090] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:09.998 BaseBdev2 00:25:09.998 14:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:09.998 14:17:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:10.257 BaseBdev3_malloc 00:25:10.257 14:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:10.516 true 00:25:10.516 14:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:10.774 [2024-07-15 14:17:56.708813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:10.774 [2024-07-15 14:17:56.709292] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:10.774 [2024-07-15 14:17:56.709399] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:10.774 [2024-07-15 14:17:56.709485] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:10.774 [2024-07-15 14:17:56.711323] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:10.774 [2024-07-15 14:17:56.711447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:10.774 BaseBdev3 00:25:10.774 14:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:10.775 14:17:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:11.049 BaseBdev4_malloc 00:25:11.308 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:11.566 true 00:25:11.566 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:11.825 [2024-07-15 14:17:57.641347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:11.825 [2024-07-15 14:17:57.641634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:11.825 [2024-07-15 14:17:57.641748] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:11.825 [2024-07-15 14:17:57.641851] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:11.825 [2024-07-15 14:17:57.643712] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:11.825 [2024-07-15 14:17:57.643856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:11.825 BaseBdev4 00:25:11.825 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:12.084 [2024-07-15 14:17:57.885459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:12.084 [2024-07-15 14:17:57.887120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:12.084 [2024-07-15 14:17:57.887222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:12.084 [2024-07-15 14:17:57.887272] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:12.084 [2024-07-15 14:17:57.887464] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:25:12.084 [2024-07-15 14:17:57.887479] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:12.084 [2024-07-15 14:17:57.887607] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:12.084 [2024-07-15 14:17:57.887895] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:25:12.084 [2024-07-15 14:17:57.887910] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:25:12.084 [2024-07-15 14:17:57.888032] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.084 14:17:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.343 14:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:12.343 "name": "raid_bdev1", 00:25:12.343 "uuid": "9692d760-6015-4baf-b89a-186a4ce99b75", 00:25:12.343 "strip_size_kb": 64, 00:25:12.343 "state": "online", 00:25:12.343 "raid_level": "concat", 00:25:12.343 "superblock": true, 00:25:12.343 "num_base_bdevs": 4, 00:25:12.343 "num_base_bdevs_discovered": 4, 00:25:12.343 
"num_base_bdevs_operational": 4, 00:25:12.343 "base_bdevs_list": [ 00:25:12.343 { 00:25:12.343 "name": "BaseBdev1", 00:25:12.343 "uuid": "120506d3-7abf-5a2c-84eb-8d64b5047b79", 00:25:12.343 "is_configured": true, 00:25:12.343 "data_offset": 2048, 00:25:12.343 "data_size": 63488 00:25:12.343 }, 00:25:12.343 { 00:25:12.343 "name": "BaseBdev2", 00:25:12.343 "uuid": "375ef595-3686-5454-8477-ee16f5e4ff18", 00:25:12.343 "is_configured": true, 00:25:12.343 "data_offset": 2048, 00:25:12.343 "data_size": 63488 00:25:12.343 }, 00:25:12.343 { 00:25:12.343 "name": "BaseBdev3", 00:25:12.343 "uuid": "f4fc0aa3-3585-5255-b331-654d4fbddb68", 00:25:12.343 "is_configured": true, 00:25:12.343 "data_offset": 2048, 00:25:12.343 "data_size": 63488 00:25:12.343 }, 00:25:12.343 { 00:25:12.343 "name": "BaseBdev4", 00:25:12.343 "uuid": "5dcba062-b324-5a3a-80e9-2c0e3af27d90", 00:25:12.343 "is_configured": true, 00:25:12.343 "data_offset": 2048, 00:25:12.343 "data_size": 63488 00:25:12.343 } 00:25:12.343 ] 00:25:12.343 }' 00:25:12.343 14:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:12.343 14:17:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:12.909 14:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:12.909 14:17:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:13.169 [2024-07-15 14:17:59.018765] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:14.104 14:17:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.363 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:25:14.622 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.622 "name": "raid_bdev1", 00:25:14.622 "uuid": "9692d760-6015-4baf-b89a-186a4ce99b75", 00:25:14.622 "strip_size_kb": 64, 00:25:14.622 "state": "online", 00:25:14.622 "raid_level": "concat", 00:25:14.622 "superblock": true, 00:25:14.622 "num_base_bdevs": 4, 00:25:14.622 "num_base_bdevs_discovered": 4, 00:25:14.622 "num_base_bdevs_operational": 4, 00:25:14.622 "base_bdevs_list": [ 00:25:14.622 { 00:25:14.622 "name": "BaseBdev1", 00:25:14.622 "uuid": "120506d3-7abf-5a2c-84eb-8d64b5047b79", 00:25:14.622 "is_configured": true, 00:25:14.622 "data_offset": 2048, 00:25:14.622 "data_size": 63488 00:25:14.622 }, 00:25:14.622 { 00:25:14.622 "name": "BaseBdev2", 00:25:14.622 "uuid": "375ef595-3686-5454-8477-ee16f5e4ff18", 00:25:14.622 "is_configured": true, 00:25:14.622 "data_offset": 2048, 00:25:14.622 "data_size": 63488 00:25:14.622 }, 00:25:14.622 { 00:25:14.622 "name": "BaseBdev3", 00:25:14.622 "uuid": "f4fc0aa3-3585-5255-b331-654d4fbddb68", 00:25:14.622 "is_configured": true, 00:25:14.622 "data_offset": 2048, 00:25:14.622 "data_size": 63488 00:25:14.622 }, 00:25:14.622 { 00:25:14.622 "name": "BaseBdev4", 00:25:14.622 "uuid": "5dcba062-b324-5a3a-80e9-2c0e3af27d90", 00:25:14.622 "is_configured": true, 00:25:14.622 "data_offset": 2048, 00:25:14.622 "data_size": 63488 00:25:14.622 } 00:25:14.622 ] 00:25:14.622 }' 00:25:14.622 14:18:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.622 14:18:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:15.190 14:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:15.449 [2024-07-15 14:18:01.363637] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:15.449 [2024-07-15 14:18:01.363693] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:15.449 [2024-07-15 14:18:01.365005] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:15.449 [2024-07-15 14:18:01.365053] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:15.449 [2024-07-15 14:18:01.365083] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:15.449 [2024-07-15 14:18:01.365094] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:25:15.449 0 00:25:15.449 14:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 206406 00:25:15.449 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 206406 ']' 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 206406 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 206406 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:15.450 killing process with pid 206406 
00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 206406' 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 206406 00:25:15.450 [2024-07-15 14:18:01.403094] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:15.450 14:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 206406 00:25:15.708 [2024-07-15 14:18:01.686286] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dAJ57TERUG 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:25:17.119 00:25:17.119 real 0m9.832s 00:25:17.119 user 0m15.432s 00:25:17.119 sys 0m1.059s 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:17.119 14:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.119 ************************************ 00:25:17.119 END TEST raid_read_error_test 00:25:17.119 ************************************ 00:25:17.119 14:18:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:17.119 14:18:02 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:25:17.119 14:18:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:17.119 14:18:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:17.119 14:18:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:17.119 ************************************ 00:25:17.119 START TEST raid_write_error_test 00:25:17.119 ************************************ 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ 
)) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:25:17.119 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ZcsKV1qnWL 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=206627 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 206627 /var/tmp/spdk-raid.sock 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 206627 ']' 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:17.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:17.120 14:18:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:17.120 [2024-07-15 14:18:03.032858] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:25:17.120 [2024-07-15 14:18:03.033069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid206627 ] 00:25:17.378 [2024-07-15 14:18:03.193697] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.636 [2024-07-15 14:18:03.418938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.636 [2024-07-15 14:18:03.620257] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:18.204 14:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:18.204 14:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:25:18.204 14:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:18.204 14:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:18.463 BaseBdev1_malloc 00:25:18.463 14:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:25:18.723 true 00:25:18.723 14:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:25:18.982 [2024-07-15 14:18:04.896064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:25:18.982 [2024-07-15 14:18:04.896571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:18.982 [2024-07-15 14:18:04.896699] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:25:18.982 [2024-07-15 14:18:04.896801] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:18.982 [2024-07-15 14:18:04.898688] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:18.982 [2024-07-15 14:18:04.898835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:18.982 BaseBdev1 00:25:18.982 14:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:18.982 14:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:19.241 BaseBdev2_malloc 00:25:19.241 14:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:25:19.810 true 00:25:19.810 14:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:25:19.810 [2024-07-15 14:18:05.779437] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:25:19.810 [2024-07-15 14:18:05.779713] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:19.810 [2024-07-15 14:18:05.779859] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:25:19.810 [2024-07-15 
14:18:05.779941] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:19.810 [2024-07-15 14:18:05.781748] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:19.810 [2024-07-15 14:18:05.781868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:19.810 BaseBdev2 00:25:19.810 14:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:19.810 14:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:20.069 BaseBdev3_malloc 00:25:20.327 14:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:25:20.327 true 00:25:20.586 14:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:25:20.586 [2024-07-15 14:18:06.568466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:25:20.586 [2024-07-15 14:18:06.569018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:20.586 [2024-07-15 14:18:06.569151] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:20.586 [2024-07-15 14:18:06.569241] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:20.586 [2024-07-15 14:18:06.571119] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:20.586 [2024-07-15 14:18:06.571238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:20.586 BaseBdev3 00:25:20.586 14:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:25:20.586 14:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:21.154 BaseBdev4_malloc 00:25:21.154 14:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:25:21.154 true 00:25:21.154 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:25:21.412 [2024-07-15 14:18:07.397726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:25:21.412 [2024-07-15 14:18:07.398031] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:21.413 [2024-07-15 14:18:07.398179] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:25:21.413 [2024-07-15 14:18:07.398303] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:21.413 [2024-07-15 14:18:07.400240] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:21.413 [2024-07-15 14:18:07.400393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:21.413 BaseBdev4 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:25:21.672 [2024-07-15 14:18:07.649987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.672 [2024-07-15 14:18:07.651576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:21.672 [2024-07-15 14:18:07.651648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:21.672 [2024-07-15 14:18:07.651696] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:21.672 [2024-07-15 14:18:07.651880] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:25:21.672 [2024-07-15 14:18:07.651896] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:25:21.672 [2024-07-15 14:18:07.652020] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:21.672 [2024-07-15 14:18:07.652272] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:25:21.672 [2024-07-15 14:18:07.652297] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:25:21.672 [2024-07-15 14:18:07.652413] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.672 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:22.240 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.240 "name": "raid_bdev1", 00:25:22.240 "uuid": "2ed7ddb8-7c87-4681-8262-11a20237000b", 00:25:22.240 "strip_size_kb": 64, 00:25:22.240 "state": "online", 00:25:22.240 "raid_level": "concat", 00:25:22.240 "superblock": true, 00:25:22.240 "num_base_bdevs": 4, 00:25:22.240 "num_base_bdevs_discovered": 4, 00:25:22.240 "num_base_bdevs_operational": 4, 00:25:22.240 "base_bdevs_list": [ 00:25:22.240 { 00:25:22.240 "name": "BaseBdev1", 00:25:22.240 "uuid": "735c5d8f-14bc-50e5-911f-29524b74bd49", 00:25:22.240 "is_configured": true, 00:25:22.240 "data_offset": 2048, 00:25:22.240 "data_size": 63488 00:25:22.240 }, 00:25:22.240 { 
00:25:22.240 "name": "BaseBdev2", 00:25:22.240 "uuid": "ad64b563-6f9d-5052-87cb-c12dfa5fdf71", 00:25:22.240 "is_configured": true, 00:25:22.240 "data_offset": 2048, 00:25:22.240 "data_size": 63488 00:25:22.240 }, 00:25:22.240 { 00:25:22.240 "name": "BaseBdev3", 00:25:22.240 "uuid": "cac6455b-9f31-59f5-942a-95d6fcd11d65", 00:25:22.240 "is_configured": true, 00:25:22.240 "data_offset": 2048, 00:25:22.240 "data_size": 63488 00:25:22.240 }, 00:25:22.240 { 00:25:22.240 "name": "BaseBdev4", 00:25:22.240 "uuid": "cb4c85b0-7ff7-5a36-a149-0f8945ce0d5e", 00:25:22.240 "is_configured": true, 00:25:22.240 "data_offset": 2048, 00:25:22.241 "data_size": 63488 00:25:22.241 } 00:25:22.241 ] 00:25:22.241 }' 00:25:22.241 14:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.241 14:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:22.808 14:18:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:25:22.808 14:18:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:25:22.808 [2024-07-15 14:18:08.803744] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:23.744 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.002 14:18:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:24.569 14:18:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:24.569 "name": "raid_bdev1", 00:25:24.569 "uuid": "2ed7ddb8-7c87-4681-8262-11a20237000b", 00:25:24.569 "strip_size_kb": 64, 00:25:24.569 "state": "online", 00:25:24.569 
"raid_level": "concat", 00:25:24.569 "superblock": true, 00:25:24.569 "num_base_bdevs": 4, 00:25:24.569 "num_base_bdevs_discovered": 4, 00:25:24.569 "num_base_bdevs_operational": 4, 00:25:24.569 "base_bdevs_list": [ 00:25:24.569 { 00:25:24.569 "name": "BaseBdev1", 00:25:24.569 "uuid": "735c5d8f-14bc-50e5-911f-29524b74bd49", 00:25:24.569 "is_configured": true, 00:25:24.569 "data_offset": 2048, 00:25:24.569 "data_size": 63488 00:25:24.569 }, 00:25:24.569 { 00:25:24.569 "name": "BaseBdev2", 00:25:24.569 "uuid": "ad64b563-6f9d-5052-87cb-c12dfa5fdf71", 00:25:24.569 "is_configured": true, 00:25:24.569 "data_offset": 2048, 00:25:24.569 "data_size": 63488 00:25:24.569 }, 00:25:24.569 { 00:25:24.569 "name": "BaseBdev3", 00:25:24.569 "uuid": "cac6455b-9f31-59f5-942a-95d6fcd11d65", 00:25:24.569 "is_configured": true, 00:25:24.569 "data_offset": 2048, 00:25:24.569 "data_size": 63488 00:25:24.569 }, 00:25:24.569 { 00:25:24.569 "name": "BaseBdev4", 00:25:24.569 "uuid": "cb4c85b0-7ff7-5a36-a149-0f8945ce0d5e", 00:25:24.569 "is_configured": true, 00:25:24.569 "data_offset": 2048, 00:25:24.569 "data_size": 63488 00:25:24.569 } 00:25:24.569 ] 00:25:24.569 }' 00:25:24.569 14:18:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:24.569 14:18:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:25.135 14:18:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:25.427 [2024-07-15 14:18:11.221664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:25.427 [2024-07-15 14:18:11.221717] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:25.427 [2024-07-15 14:18:11.223150] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:25.427 [2024-07-15 14:18:11.223214] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:25.427 [2024-07-15 14:18:11.223244] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:25.427 [2024-07-15 14:18:11.223254] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:25:25.427 0 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 206627 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 206627 ']' 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 206627 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 206627 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 206627' 00:25:25.427 killing process with pid 206627 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 206627 00:25:25.427 [2024-07-15 14:18:11.276013] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:25.427 14:18:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 206627 00:25:25.691 [2024-07-15 14:18:11.562499] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ZcsKV1qnWL 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.41 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.41 != \0\.\0\0 ]] 00:25:27.070 00:25:27.070 real 0m9.815s 00:25:27.070 user 0m15.325s 00:25:27.070 sys 0m1.118s 00:25:27.070 14:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.071 14:18:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.071 ************************************ 00:25:27.071 END TEST raid_write_error_test 00:25:27.071 ************************************ 00:25:27.071 14:18:12 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:25:27.071 14:18:12 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:25:27.071 14:18:12 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:25:27.071 14:18:12 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:25:27.071 14:18:12 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.071 14:18:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:27.071 ************************************ 00:25:27.071 START TEST raid_state_function_test 00:25:27.071 ************************************ 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:27.071 
14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=206846 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:25:27.071 Process raid pid: 206846 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 206846' 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 206846 /var/tmp/spdk-raid.sock 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 206846 ']' 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.071 14:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:27.071 [2024-07-15 14:18:12.898997] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
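The locals and launch command traced above are harness plumbing: start the stripped-down bdev_svc app on the raid RPC socket with bdev_raid debug logging, wait for it to answer, then drive everything through rpc.py. A minimal sketch of that startup, reusing the paths shown in the log; the polling loop is a stand-in for the real waitforlisten helper and is an assumption, not the autotest code:

# Illustrative sketch of the bdev_svc startup traced above; not the verbatim autotest helpers.
# -L bdev_raid enables the *DEBUG* lines that appear throughout this log.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Stand-in for waitforlisten: poll the UNIX socket until the app responds to an RPC.
until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

# From here on the test issues RPCs through $RPC and kills $raid_pid when it finishes.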
00:25:27.071 [2024-07-15 14:18:12.899682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.071 [2024-07-15 14:18:13.061451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.330 [2024-07-15 14:18:13.316960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.588 [2024-07-15 14:18:13.521848] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:28.153 14:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.153 14:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:25:28.153 14:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:28.153 [2024-07-15 14:18:14.110729] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:28.153 [2024-07-15 14:18:14.111198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:28.153 [2024-07-15 14:18:14.111231] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:28.153 [2024-07-15 14:18:14.111328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:28.153 [2024-07-15 14:18:14.111350] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:28.153 [2024-07-15 14:18:14.111432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:28.153 [2024-07-15 14:18:14.111452] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:28.153 [2024-07-15 14:18:14.111533] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.153 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:25:28.411 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.411 "name": "Existed_Raid", 00:25:28.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.411 "strip_size_kb": 0, 00:25:28.411 "state": "configuring", 00:25:28.411 "raid_level": "raid1", 00:25:28.411 "superblock": false, 00:25:28.411 "num_base_bdevs": 4, 00:25:28.411 "num_base_bdevs_discovered": 0, 00:25:28.411 "num_base_bdevs_operational": 4, 00:25:28.411 "base_bdevs_list": [ 00:25:28.411 { 00:25:28.411 "name": "BaseBdev1", 00:25:28.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.411 "is_configured": false, 00:25:28.411 "data_offset": 0, 00:25:28.411 "data_size": 0 00:25:28.411 }, 00:25:28.411 { 00:25:28.411 "name": "BaseBdev2", 00:25:28.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.411 "is_configured": false, 00:25:28.411 "data_offset": 0, 00:25:28.411 "data_size": 0 00:25:28.411 }, 00:25:28.411 { 00:25:28.411 "name": "BaseBdev3", 00:25:28.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.411 "is_configured": false, 00:25:28.411 "data_offset": 0, 00:25:28.411 "data_size": 0 00:25:28.411 }, 00:25:28.411 { 00:25:28.411 "name": "BaseBdev4", 00:25:28.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:28.411 "is_configured": false, 00:25:28.411 "data_offset": 0, 00:25:28.411 "data_size": 0 00:25:28.411 } 00:25:28.411 ] 00:25:28.411 }' 00:25:28.411 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.411 14:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:29.019 14:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:29.283 [2024-07-15 14:18:15.214924] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:29.283 [2024-07-15 14:18:15.214976] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:29.283 14:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:29.541 [2024-07-15 14:18:15.463017] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:29.541 [2024-07-15 14:18:15.463098] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:29.541 [2024-07-15 14:18:15.463112] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:29.541 [2024-07-15 14:18:15.463139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:29.541 [2024-07-15 14:18:15.463149] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:29.541 [2024-07-15 14:18:15.463185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:29.541 [2024-07-15 14:18:15.463194] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:29.541 [2024-07-15 14:18:15.463219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:29.541 14:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:29.799 [2024-07-15 14:18:15.744886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:29.799 BaseBdev1 00:25:29.799 14:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:29.799 14:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:29.799 14:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:29.799 14:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:29.799 14:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:29.799 14:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:29.799 14:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:30.058 14:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:30.624 [ 00:25:30.624 { 00:25:30.624 "name": "BaseBdev1", 00:25:30.624 "aliases": [ 00:25:30.624 "b96c54d8-2a47-4798-a790-047d3b43e917" 00:25:30.624 ], 00:25:30.624 "product_name": "Malloc disk", 00:25:30.624 "block_size": 512, 00:25:30.624 "num_blocks": 65536, 00:25:30.624 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:30.624 "assigned_rate_limits": { 00:25:30.624 "rw_ios_per_sec": 0, 00:25:30.624 "rw_mbytes_per_sec": 0, 00:25:30.624 "r_mbytes_per_sec": 0, 00:25:30.624 "w_mbytes_per_sec": 0 00:25:30.624 }, 00:25:30.624 "claimed": true, 00:25:30.624 "claim_type": "exclusive_write", 00:25:30.624 "zoned": false, 00:25:30.624 "supported_io_types": { 00:25:30.624 "read": true, 00:25:30.624 "write": true, 00:25:30.624 "unmap": true, 00:25:30.624 "flush": true, 00:25:30.624 "reset": true, 00:25:30.624 "nvme_admin": false, 00:25:30.624 "nvme_io": false, 00:25:30.624 "nvme_io_md": false, 00:25:30.624 "write_zeroes": true, 00:25:30.624 "zcopy": true, 00:25:30.624 "get_zone_info": false, 00:25:30.624 "zone_management": false, 00:25:30.624 "zone_append": false, 00:25:30.624 "compare": false, 00:25:30.624 "compare_and_write": false, 00:25:30.624 "abort": true, 00:25:30.624 "seek_hole": false, 00:25:30.624 "seek_data": false, 00:25:30.624 "copy": true, 00:25:30.624 "nvme_iov_md": false 00:25:30.624 }, 00:25:30.624 "memory_domains": [ 00:25:30.624 { 00:25:30.624 "dma_device_id": "system", 00:25:30.624 "dma_device_type": 1 00:25:30.624 }, 00:25:30.624 { 00:25:30.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:30.624 "dma_device_type": 2 00:25:30.624 } 00:25:30.624 ], 00:25:30.624 "driver_specific": {} 00:25:30.624 } 00:25:30.624 ] 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:30.624 14:18:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:30.624 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.882 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.882 "name": "Existed_Raid", 00:25:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.882 "strip_size_kb": 0, 00:25:30.882 "state": "configuring", 00:25:30.882 "raid_level": "raid1", 00:25:30.882 "superblock": false, 00:25:30.882 "num_base_bdevs": 4, 00:25:30.882 "num_base_bdevs_discovered": 1, 00:25:30.882 "num_base_bdevs_operational": 4, 00:25:30.882 "base_bdevs_list": [ 00:25:30.882 { 00:25:30.882 "name": "BaseBdev1", 00:25:30.882 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:30.882 "is_configured": true, 00:25:30.882 "data_offset": 0, 00:25:30.882 "data_size": 65536 00:25:30.882 }, 00:25:30.882 { 00:25:30.882 "name": "BaseBdev2", 00:25:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.882 "is_configured": false, 00:25:30.882 "data_offset": 0, 00:25:30.882 "data_size": 0 00:25:30.882 }, 00:25:30.882 { 00:25:30.882 "name": "BaseBdev3", 00:25:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.882 "is_configured": false, 00:25:30.882 "data_offset": 0, 00:25:30.882 "data_size": 0 00:25:30.882 }, 00:25:30.882 { 00:25:30.882 "name": "BaseBdev4", 00:25:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:30.882 "is_configured": false, 00:25:30.882 "data_offset": 0, 00:25:30.882 "data_size": 0 00:25:30.882 } 00:25:30.882 ] 00:25:30.882 }' 00:25:30.882 14:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.882 14:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:31.448 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:31.706 [2024-07-15 14:18:17.581334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:31.706 [2024-07-15 14:18:17.581392] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:25:31.706 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:31.967 [2024-07-15 14:18:17.885446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:31.967 [2024-07-15 14:18:17.887091] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:31.967 
[2024-07-15 14:18:17.887179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:31.967 [2024-07-15 14:18:17.887208] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:31.967 [2024-07-15 14:18:17.887243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:31.967 [2024-07-15 14:18:17.887253] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:31.967 [2024-07-15 14:18:17.887273] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.967 14:18:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:32.227 14:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:32.227 "name": "Existed_Raid", 00:25:32.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.227 "strip_size_kb": 0, 00:25:32.227 "state": "configuring", 00:25:32.227 "raid_level": "raid1", 00:25:32.227 "superblock": false, 00:25:32.227 "num_base_bdevs": 4, 00:25:32.227 "num_base_bdevs_discovered": 1, 00:25:32.227 "num_base_bdevs_operational": 4, 00:25:32.227 "base_bdevs_list": [ 00:25:32.227 { 00:25:32.227 "name": "BaseBdev1", 00:25:32.227 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:32.227 "is_configured": true, 00:25:32.227 "data_offset": 0, 00:25:32.227 "data_size": 65536 00:25:32.227 }, 00:25:32.227 { 00:25:32.227 "name": "BaseBdev2", 00:25:32.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.227 "is_configured": false, 00:25:32.227 "data_offset": 0, 00:25:32.227 "data_size": 0 00:25:32.227 }, 00:25:32.227 { 00:25:32.227 "name": "BaseBdev3", 00:25:32.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.227 "is_configured": false, 00:25:32.227 "data_offset": 0, 00:25:32.227 "data_size": 0 00:25:32.227 }, 00:25:32.227 { 00:25:32.227 "name": "BaseBdev4", 
00:25:32.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:32.227 "is_configured": false, 00:25:32.227 "data_offset": 0, 00:25:32.227 "data_size": 0 00:25:32.227 } 00:25:32.227 ] 00:25:32.227 }' 00:25:32.227 14:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:32.227 14:18:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:33.173 14:18:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:33.173 [2024-07-15 14:18:19.161310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:33.173 BaseBdev2 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:33.430 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:33.998 [ 00:25:33.998 { 00:25:33.998 "name": "BaseBdev2", 00:25:33.998 "aliases": [ 00:25:33.998 "8ebaa18c-b7de-4d78-ba20-139996ac3592" 00:25:33.998 ], 00:25:33.998 "product_name": "Malloc disk", 00:25:33.998 "block_size": 512, 00:25:33.998 "num_blocks": 65536, 00:25:33.998 "uuid": "8ebaa18c-b7de-4d78-ba20-139996ac3592", 00:25:33.998 "assigned_rate_limits": { 00:25:33.998 "rw_ios_per_sec": 0, 00:25:33.998 "rw_mbytes_per_sec": 0, 00:25:33.998 "r_mbytes_per_sec": 0, 00:25:33.998 "w_mbytes_per_sec": 0 00:25:33.998 }, 00:25:33.999 "claimed": true, 00:25:33.999 "claim_type": "exclusive_write", 00:25:33.999 "zoned": false, 00:25:33.999 "supported_io_types": { 00:25:33.999 "read": true, 00:25:33.999 "write": true, 00:25:33.999 "unmap": true, 00:25:33.999 "flush": true, 00:25:33.999 "reset": true, 00:25:33.999 "nvme_admin": false, 00:25:33.999 "nvme_io": false, 00:25:33.999 "nvme_io_md": false, 00:25:33.999 "write_zeroes": true, 00:25:33.999 "zcopy": true, 00:25:33.999 "get_zone_info": false, 00:25:33.999 "zone_management": false, 00:25:33.999 "zone_append": false, 00:25:33.999 "compare": false, 00:25:33.999 "compare_and_write": false, 00:25:33.999 "abort": true, 00:25:33.999 "seek_hole": false, 00:25:33.999 "seek_data": false, 00:25:33.999 "copy": true, 00:25:33.999 "nvme_iov_md": false 00:25:33.999 }, 00:25:33.999 "memory_domains": [ 00:25:33.999 { 00:25:33.999 "dma_device_id": "system", 00:25:33.999 "dma_device_type": 1 00:25:33.999 }, 00:25:33.999 { 00:25:33.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.999 "dma_device_type": 2 00:25:33.999 } 00:25:33.999 ], 00:25:33.999 "driver_specific": {} 00:25:33.999 } 00:25:33.999 ] 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@905 -- # return 0 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:33.999 "name": "Existed_Raid", 00:25:33.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.999 "strip_size_kb": 0, 00:25:33.999 "state": "configuring", 00:25:33.999 "raid_level": "raid1", 00:25:33.999 "superblock": false, 00:25:33.999 "num_base_bdevs": 4, 00:25:33.999 "num_base_bdevs_discovered": 2, 00:25:33.999 "num_base_bdevs_operational": 4, 00:25:33.999 "base_bdevs_list": [ 00:25:33.999 { 00:25:33.999 "name": "BaseBdev1", 00:25:33.999 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:33.999 "is_configured": true, 00:25:33.999 "data_offset": 0, 00:25:33.999 "data_size": 65536 00:25:33.999 }, 00:25:33.999 { 00:25:33.999 "name": "BaseBdev2", 00:25:33.999 "uuid": "8ebaa18c-b7de-4d78-ba20-139996ac3592", 00:25:33.999 "is_configured": true, 00:25:33.999 "data_offset": 0, 00:25:33.999 "data_size": 65536 00:25:33.999 }, 00:25:33.999 { 00:25:33.999 "name": "BaseBdev3", 00:25:33.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.999 "is_configured": false, 00:25:33.999 "data_offset": 0, 00:25:33.999 "data_size": 0 00:25:33.999 }, 00:25:33.999 { 00:25:33.999 "name": "BaseBdev4", 00:25:33.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:33.999 "is_configured": false, 00:25:33.999 "data_offset": 0, 00:25:33.999 "data_size": 0 00:25:33.999 } 00:25:33.999 ] 00:25:33.999 }' 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:33.999 14:18:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.932 14:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 
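The pattern the trace is working through here is the configuring-to-online transition: the raid1 bdev is registered while its members do not yet exist, each bdev_malloc_create is claimed as it appears, and the array only reaches "online" once the fourth member shows up. A condensed sketch of that progression using the same geometry as the log; the real test also deletes and recreates Existed_Raid between steps, which is omitted here:

# Condensed sketch of the configuring -> online progression exercised above (illustrative only).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Register the raid1 array first; none of its base bdevs exist yet, so it sits in "configuring".
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks, matching the bdev_malloc_create calls in the log.
    $RPC bdev_malloc_create 32 512 -b "BaseBdev$i"
    $RPC bdev_wait_for_examine
    # num_base_bdevs_discovered grows by one per pass; state flips to "online" only after the 4th.
    $RPC bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
done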
00:25:35.190 [2024-07-15 14:18:20.976326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:35.190 BaseBdev3 00:25:35.190 14:18:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:35.190 14:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:35.190 14:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:35.190 14:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:35.190 14:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:35.190 14:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:35.190 14:18:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:35.448 14:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:35.706 [ 00:25:35.706 { 00:25:35.706 "name": "BaseBdev3", 00:25:35.706 "aliases": [ 00:25:35.706 "76edf266-6695-4c97-8156-7f269819353d" 00:25:35.706 ], 00:25:35.706 "product_name": "Malloc disk", 00:25:35.706 "block_size": 512, 00:25:35.706 "num_blocks": 65536, 00:25:35.706 "uuid": "76edf266-6695-4c97-8156-7f269819353d", 00:25:35.706 "assigned_rate_limits": { 00:25:35.706 "rw_ios_per_sec": 0, 00:25:35.706 "rw_mbytes_per_sec": 0, 00:25:35.706 "r_mbytes_per_sec": 0, 00:25:35.706 "w_mbytes_per_sec": 0 00:25:35.706 }, 00:25:35.706 "claimed": true, 00:25:35.706 "claim_type": "exclusive_write", 00:25:35.706 "zoned": false, 00:25:35.706 "supported_io_types": { 00:25:35.706 "read": true, 00:25:35.706 "write": true, 00:25:35.706 "unmap": true, 00:25:35.706 "flush": true, 00:25:35.706 "reset": true, 00:25:35.706 "nvme_admin": false, 00:25:35.706 "nvme_io": false, 00:25:35.706 "nvme_io_md": false, 00:25:35.706 "write_zeroes": true, 00:25:35.706 "zcopy": true, 00:25:35.706 "get_zone_info": false, 00:25:35.706 "zone_management": false, 00:25:35.706 "zone_append": false, 00:25:35.706 "compare": false, 00:25:35.706 "compare_and_write": false, 00:25:35.706 "abort": true, 00:25:35.706 "seek_hole": false, 00:25:35.706 "seek_data": false, 00:25:35.706 "copy": true, 00:25:35.706 "nvme_iov_md": false 00:25:35.706 }, 00:25:35.706 "memory_domains": [ 00:25:35.706 { 00:25:35.706 "dma_device_id": "system", 00:25:35.706 "dma_device_type": 1 00:25:35.706 }, 00:25:35.706 { 00:25:35.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:35.706 "dma_device_type": 2 00:25:35.706 } 00:25:35.706 ], 00:25:35.706 "driver_specific": {} 00:25:35.706 } 00:25:35.706 ] 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:35.706 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:35.963 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:35.963 "name": "Existed_Raid", 00:25:35.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.963 "strip_size_kb": 0, 00:25:35.963 "state": "configuring", 00:25:35.963 "raid_level": "raid1", 00:25:35.963 "superblock": false, 00:25:35.963 "num_base_bdevs": 4, 00:25:35.963 "num_base_bdevs_discovered": 3, 00:25:35.963 "num_base_bdevs_operational": 4, 00:25:35.963 "base_bdevs_list": [ 00:25:35.963 { 00:25:35.963 "name": "BaseBdev1", 00:25:35.963 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:35.963 "is_configured": true, 00:25:35.963 "data_offset": 0, 00:25:35.963 "data_size": 65536 00:25:35.963 }, 00:25:35.963 { 00:25:35.963 "name": "BaseBdev2", 00:25:35.963 "uuid": "8ebaa18c-b7de-4d78-ba20-139996ac3592", 00:25:35.963 "is_configured": true, 00:25:35.963 "data_offset": 0, 00:25:35.963 "data_size": 65536 00:25:35.963 }, 00:25:35.963 { 00:25:35.963 "name": "BaseBdev3", 00:25:35.963 "uuid": "76edf266-6695-4c97-8156-7f269819353d", 00:25:35.963 "is_configured": true, 00:25:35.963 "data_offset": 0, 00:25:35.963 "data_size": 65536 00:25:35.963 }, 00:25:35.963 { 00:25:35.963 "name": "BaseBdev4", 00:25:35.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.963 "is_configured": false, 00:25:35.963 "data_offset": 0, 00:25:35.963 "data_size": 0 00:25:35.963 } 00:25:35.963 ] 00:25:35.963 }' 00:25:35.964 14:18:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:35.964 14:18:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:36.556 14:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:36.814 [2024-07-15 14:18:22.726156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:36.814 [2024-07-15 14:18:22.726477] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:25:36.814 [2024-07-15 14:18:22.726529] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:36.814 [2024-07-15 14:18:22.726755] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:36.814 [2024-07-15 14:18:22.727165] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x616000007280 00:25:36.814 [2024-07-15 14:18:22.727329] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:25:36.814 [2024-07-15 14:18:22.727652] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:36.814 BaseBdev4 00:25:36.814 14:18:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:36.814 14:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:36.814 14:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:36.814 14:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:36.814 14:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:36.814 14:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:36.814 14:18:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:37.073 14:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:37.340 [ 00:25:37.340 { 00:25:37.340 "name": "BaseBdev4", 00:25:37.340 "aliases": [ 00:25:37.340 "5cfe2587-1368-4e2f-9c81-0ef1b1476388" 00:25:37.340 ], 00:25:37.340 "product_name": "Malloc disk", 00:25:37.340 "block_size": 512, 00:25:37.340 "num_blocks": 65536, 00:25:37.340 "uuid": "5cfe2587-1368-4e2f-9c81-0ef1b1476388", 00:25:37.340 "assigned_rate_limits": { 00:25:37.340 "rw_ios_per_sec": 0, 00:25:37.340 "rw_mbytes_per_sec": 0, 00:25:37.340 "r_mbytes_per_sec": 0, 00:25:37.340 "w_mbytes_per_sec": 0 00:25:37.340 }, 00:25:37.340 "claimed": true, 00:25:37.340 "claim_type": "exclusive_write", 00:25:37.340 "zoned": false, 00:25:37.340 "supported_io_types": { 00:25:37.340 "read": true, 00:25:37.340 "write": true, 00:25:37.340 "unmap": true, 00:25:37.340 "flush": true, 00:25:37.340 "reset": true, 00:25:37.340 "nvme_admin": false, 00:25:37.340 "nvme_io": false, 00:25:37.340 "nvme_io_md": false, 00:25:37.340 "write_zeroes": true, 00:25:37.340 "zcopy": true, 00:25:37.340 "get_zone_info": false, 00:25:37.340 "zone_management": false, 00:25:37.340 "zone_append": false, 00:25:37.340 "compare": false, 00:25:37.340 "compare_and_write": false, 00:25:37.340 "abort": true, 00:25:37.340 "seek_hole": false, 00:25:37.340 "seek_data": false, 00:25:37.340 "copy": true, 00:25:37.340 "nvme_iov_md": false 00:25:37.340 }, 00:25:37.340 "memory_domains": [ 00:25:37.340 { 00:25:37.340 "dma_device_id": "system", 00:25:37.340 "dma_device_type": 1 00:25:37.340 }, 00:25:37.340 { 00:25:37.340 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:37.340 "dma_device_type": 2 00:25:37.340 } 00:25:37.340 ], 00:25:37.340 "driver_specific": {} 00:25:37.340 } 00:25:37.340 ] 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.340 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.599 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:37.599 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.599 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:37.599 "name": "Existed_Raid", 00:25:37.599 "uuid": "3b872aec-fe4e-4885-b5c1-beccdd8c3cf9", 00:25:37.599 "strip_size_kb": 0, 00:25:37.599 "state": "online", 00:25:37.599 "raid_level": "raid1", 00:25:37.599 "superblock": false, 00:25:37.599 "num_base_bdevs": 4, 00:25:37.599 "num_base_bdevs_discovered": 4, 00:25:37.599 "num_base_bdevs_operational": 4, 00:25:37.599 "base_bdevs_list": [ 00:25:37.599 { 00:25:37.599 "name": "BaseBdev1", 00:25:37.599 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:37.599 "is_configured": true, 00:25:37.599 "data_offset": 0, 00:25:37.599 "data_size": 65536 00:25:37.599 }, 00:25:37.599 { 00:25:37.599 "name": "BaseBdev2", 00:25:37.599 "uuid": "8ebaa18c-b7de-4d78-ba20-139996ac3592", 00:25:37.599 "is_configured": true, 00:25:37.599 "data_offset": 0, 00:25:37.599 "data_size": 65536 00:25:37.599 }, 00:25:37.599 { 00:25:37.599 "name": "BaseBdev3", 00:25:37.599 "uuid": "76edf266-6695-4c97-8156-7f269819353d", 00:25:37.599 "is_configured": true, 00:25:37.599 "data_offset": 0, 00:25:37.599 "data_size": 65536 00:25:37.599 }, 00:25:37.599 { 00:25:37.599 "name": "BaseBdev4", 00:25:37.599 "uuid": "5cfe2587-1368-4e2f-9c81-0ef1b1476388", 00:25:37.599 "is_configured": true, 00:25:37.599 "data_offset": 0, 00:25:37.599 "data_size": 65536 00:25:37.599 } 00:25:37.599 ] 00:25:37.599 }' 00:25:37.599 14:18:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:37.599 14:18:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # 
local name 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:38.535 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:38.793 [2024-07-15 14:18:24.582775] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:38.793 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:38.793 "name": "Existed_Raid", 00:25:38.793 "aliases": [ 00:25:38.793 "3b872aec-fe4e-4885-b5c1-beccdd8c3cf9" 00:25:38.793 ], 00:25:38.793 "product_name": "Raid Volume", 00:25:38.793 "block_size": 512, 00:25:38.793 "num_blocks": 65536, 00:25:38.793 "uuid": "3b872aec-fe4e-4885-b5c1-beccdd8c3cf9", 00:25:38.793 "assigned_rate_limits": { 00:25:38.793 "rw_ios_per_sec": 0, 00:25:38.793 "rw_mbytes_per_sec": 0, 00:25:38.793 "r_mbytes_per_sec": 0, 00:25:38.793 "w_mbytes_per_sec": 0 00:25:38.793 }, 00:25:38.793 "claimed": false, 00:25:38.793 "zoned": false, 00:25:38.793 "supported_io_types": { 00:25:38.793 "read": true, 00:25:38.793 "write": true, 00:25:38.793 "unmap": false, 00:25:38.793 "flush": false, 00:25:38.793 "reset": true, 00:25:38.793 "nvme_admin": false, 00:25:38.793 "nvme_io": false, 00:25:38.793 "nvme_io_md": false, 00:25:38.793 "write_zeroes": true, 00:25:38.793 "zcopy": false, 00:25:38.793 "get_zone_info": false, 00:25:38.793 "zone_management": false, 00:25:38.793 "zone_append": false, 00:25:38.793 "compare": false, 00:25:38.793 "compare_and_write": false, 00:25:38.793 "abort": false, 00:25:38.793 "seek_hole": false, 00:25:38.793 "seek_data": false, 00:25:38.793 "copy": false, 00:25:38.793 "nvme_iov_md": false 00:25:38.793 }, 00:25:38.793 "memory_domains": [ 00:25:38.793 { 00:25:38.793 "dma_device_id": "system", 00:25:38.793 "dma_device_type": 1 00:25:38.793 }, 00:25:38.793 { 00:25:38.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.793 "dma_device_type": 2 00:25:38.793 }, 00:25:38.793 { 00:25:38.793 "dma_device_id": "system", 00:25:38.793 "dma_device_type": 1 00:25:38.793 }, 00:25:38.793 { 00:25:38.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.793 "dma_device_type": 2 00:25:38.793 }, 00:25:38.793 { 00:25:38.793 "dma_device_id": "system", 00:25:38.793 "dma_device_type": 1 00:25:38.793 }, 00:25:38.793 { 00:25:38.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.793 "dma_device_type": 2 00:25:38.793 }, 00:25:38.793 { 00:25:38.793 "dma_device_id": "system", 00:25:38.793 "dma_device_type": 1 00:25:38.793 }, 00:25:38.793 { 00:25:38.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.793 "dma_device_type": 2 00:25:38.793 } 00:25:38.793 ], 00:25:38.793 "driver_specific": { 00:25:38.793 "raid": { 00:25:38.793 "uuid": "3b872aec-fe4e-4885-b5c1-beccdd8c3cf9", 00:25:38.793 "strip_size_kb": 0, 00:25:38.793 "state": "online", 00:25:38.794 "raid_level": "raid1", 00:25:38.794 "superblock": false, 00:25:38.794 "num_base_bdevs": 4, 00:25:38.794 "num_base_bdevs_discovered": 4, 00:25:38.794 "num_base_bdevs_operational": 4, 00:25:38.794 "base_bdevs_list": [ 00:25:38.794 { 00:25:38.794 "name": "BaseBdev1", 00:25:38.794 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:38.794 "is_configured": true, 00:25:38.794 "data_offset": 0, 00:25:38.794 "data_size": 65536 00:25:38.794 }, 00:25:38.794 { 00:25:38.794 "name": "BaseBdev2", 00:25:38.794 "uuid": "8ebaa18c-b7de-4d78-ba20-139996ac3592", 00:25:38.794 "is_configured": true, 00:25:38.794 "data_offset": 0, 00:25:38.794 
"data_size": 65536 00:25:38.794 }, 00:25:38.794 { 00:25:38.794 "name": "BaseBdev3", 00:25:38.794 "uuid": "76edf266-6695-4c97-8156-7f269819353d", 00:25:38.794 "is_configured": true, 00:25:38.794 "data_offset": 0, 00:25:38.794 "data_size": 65536 00:25:38.794 }, 00:25:38.794 { 00:25:38.794 "name": "BaseBdev4", 00:25:38.794 "uuid": "5cfe2587-1368-4e2f-9c81-0ef1b1476388", 00:25:38.794 "is_configured": true, 00:25:38.794 "data_offset": 0, 00:25:38.794 "data_size": 65536 00:25:38.794 } 00:25:38.794 ] 00:25:38.794 } 00:25:38.794 } 00:25:38.794 }' 00:25:38.794 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:38.794 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:38.794 BaseBdev2 00:25:38.794 BaseBdev3 00:25:38.794 BaseBdev4' 00:25:38.794 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:38.794 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:38.794 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:39.053 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:39.053 "name": "BaseBdev1", 00:25:39.053 "aliases": [ 00:25:39.053 "b96c54d8-2a47-4798-a790-047d3b43e917" 00:25:39.053 ], 00:25:39.053 "product_name": "Malloc disk", 00:25:39.053 "block_size": 512, 00:25:39.053 "num_blocks": 65536, 00:25:39.053 "uuid": "b96c54d8-2a47-4798-a790-047d3b43e917", 00:25:39.053 "assigned_rate_limits": { 00:25:39.053 "rw_ios_per_sec": 0, 00:25:39.053 "rw_mbytes_per_sec": 0, 00:25:39.053 "r_mbytes_per_sec": 0, 00:25:39.053 "w_mbytes_per_sec": 0 00:25:39.053 }, 00:25:39.053 "claimed": true, 00:25:39.053 "claim_type": "exclusive_write", 00:25:39.053 "zoned": false, 00:25:39.053 "supported_io_types": { 00:25:39.053 "read": true, 00:25:39.053 "write": true, 00:25:39.053 "unmap": true, 00:25:39.053 "flush": true, 00:25:39.053 "reset": true, 00:25:39.053 "nvme_admin": false, 00:25:39.053 "nvme_io": false, 00:25:39.053 "nvme_io_md": false, 00:25:39.053 "write_zeroes": true, 00:25:39.053 "zcopy": true, 00:25:39.053 "get_zone_info": false, 00:25:39.053 "zone_management": false, 00:25:39.053 "zone_append": false, 00:25:39.053 "compare": false, 00:25:39.053 "compare_and_write": false, 00:25:39.053 "abort": true, 00:25:39.053 "seek_hole": false, 00:25:39.053 "seek_data": false, 00:25:39.053 "copy": true, 00:25:39.053 "nvme_iov_md": false 00:25:39.053 }, 00:25:39.053 "memory_domains": [ 00:25:39.053 { 00:25:39.053 "dma_device_id": "system", 00:25:39.053 "dma_device_type": 1 00:25:39.053 }, 00:25:39.053 { 00:25:39.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.053 "dma_device_type": 2 00:25:39.053 } 00:25:39.053 ], 00:25:39.053 "driver_specific": {} 00:25:39.053 }' 00:25:39.053 14:18:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.053 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.313 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:39.313 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.313 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.313 14:18:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:39.313 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:39.313 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:39.313 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:39.313 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:39.571 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:39.571 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:39.571 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:39.571 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:39.571 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:39.829 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:39.829 "name": "BaseBdev2", 00:25:39.829 "aliases": [ 00:25:39.829 "8ebaa18c-b7de-4d78-ba20-139996ac3592" 00:25:39.829 ], 00:25:39.829 "product_name": "Malloc disk", 00:25:39.829 "block_size": 512, 00:25:39.829 "num_blocks": 65536, 00:25:39.829 "uuid": "8ebaa18c-b7de-4d78-ba20-139996ac3592", 00:25:39.829 "assigned_rate_limits": { 00:25:39.829 "rw_ios_per_sec": 0, 00:25:39.829 "rw_mbytes_per_sec": 0, 00:25:39.829 "r_mbytes_per_sec": 0, 00:25:39.829 "w_mbytes_per_sec": 0 00:25:39.829 }, 00:25:39.829 "claimed": true, 00:25:39.829 "claim_type": "exclusive_write", 00:25:39.829 "zoned": false, 00:25:39.829 "supported_io_types": { 00:25:39.829 "read": true, 00:25:39.829 "write": true, 00:25:39.829 "unmap": true, 00:25:39.829 "flush": true, 00:25:39.829 "reset": true, 00:25:39.829 "nvme_admin": false, 00:25:39.829 "nvme_io": false, 00:25:39.829 "nvme_io_md": false, 00:25:39.829 "write_zeroes": true, 00:25:39.829 "zcopy": true, 00:25:39.829 "get_zone_info": false, 00:25:39.829 "zone_management": false, 00:25:39.829 "zone_append": false, 00:25:39.829 "compare": false, 00:25:39.829 "compare_and_write": false, 00:25:39.829 "abort": true, 00:25:39.829 "seek_hole": false, 00:25:39.829 "seek_data": false, 00:25:39.829 "copy": true, 00:25:39.829 "nvme_iov_md": false 00:25:39.829 }, 00:25:39.829 "memory_domains": [ 00:25:39.829 { 00:25:39.829 "dma_device_id": "system", 00:25:39.829 "dma_device_type": 1 00:25:39.829 }, 00:25:39.829 { 00:25:39.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.829 "dma_device_type": 2 00:25:39.829 } 00:25:39.829 ], 00:25:39.829 "driver_specific": {} 00:25:39.829 }' 00:25:39.829 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.829 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.829 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:39.829 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.829 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.087 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:40.087 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:25:40.087 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.087 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:40.087 14:18:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.087 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.087 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:40.087 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:40.087 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:40.087 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:40.347 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:40.347 "name": "BaseBdev3", 00:25:40.347 "aliases": [ 00:25:40.347 "76edf266-6695-4c97-8156-7f269819353d" 00:25:40.347 ], 00:25:40.347 "product_name": "Malloc disk", 00:25:40.347 "block_size": 512, 00:25:40.347 "num_blocks": 65536, 00:25:40.347 "uuid": "76edf266-6695-4c97-8156-7f269819353d", 00:25:40.347 "assigned_rate_limits": { 00:25:40.347 "rw_ios_per_sec": 0, 00:25:40.347 "rw_mbytes_per_sec": 0, 00:25:40.347 "r_mbytes_per_sec": 0, 00:25:40.347 "w_mbytes_per_sec": 0 00:25:40.347 }, 00:25:40.347 "claimed": true, 00:25:40.347 "claim_type": "exclusive_write", 00:25:40.347 "zoned": false, 00:25:40.347 "supported_io_types": { 00:25:40.347 "read": true, 00:25:40.347 "write": true, 00:25:40.347 "unmap": true, 00:25:40.347 "flush": true, 00:25:40.347 "reset": true, 00:25:40.347 "nvme_admin": false, 00:25:40.347 "nvme_io": false, 00:25:40.347 "nvme_io_md": false, 00:25:40.347 "write_zeroes": true, 00:25:40.347 "zcopy": true, 00:25:40.347 "get_zone_info": false, 00:25:40.347 "zone_management": false, 00:25:40.347 "zone_append": false, 00:25:40.347 "compare": false, 00:25:40.347 "compare_and_write": false, 00:25:40.347 "abort": true, 00:25:40.347 "seek_hole": false, 00:25:40.347 "seek_data": false, 00:25:40.347 "copy": true, 00:25:40.347 "nvme_iov_md": false 00:25:40.347 }, 00:25:40.347 "memory_domains": [ 00:25:40.347 { 00:25:40.347 "dma_device_id": "system", 00:25:40.347 "dma_device_type": 1 00:25:40.347 }, 00:25:40.347 { 00:25:40.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.347 "dma_device_type": 2 00:25:40.347 } 00:25:40.347 ], 00:25:40.347 "driver_specific": {} 00:25:40.347 }' 00:25:40.347 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.605 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.605 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:40.605 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.605 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.605 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:40.605 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.605 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.864 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null 
== null ]] 00:25:40.864 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.864 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.864 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:40.864 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:40.864 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:40.864 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:41.123 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:41.123 "name": "BaseBdev4", 00:25:41.123 "aliases": [ 00:25:41.123 "5cfe2587-1368-4e2f-9c81-0ef1b1476388" 00:25:41.123 ], 00:25:41.123 "product_name": "Malloc disk", 00:25:41.123 "block_size": 512, 00:25:41.123 "num_blocks": 65536, 00:25:41.123 "uuid": "5cfe2587-1368-4e2f-9c81-0ef1b1476388", 00:25:41.123 "assigned_rate_limits": { 00:25:41.123 "rw_ios_per_sec": 0, 00:25:41.123 "rw_mbytes_per_sec": 0, 00:25:41.123 "r_mbytes_per_sec": 0, 00:25:41.123 "w_mbytes_per_sec": 0 00:25:41.123 }, 00:25:41.123 "claimed": true, 00:25:41.123 "claim_type": "exclusive_write", 00:25:41.123 "zoned": false, 00:25:41.123 "supported_io_types": { 00:25:41.123 "read": true, 00:25:41.123 "write": true, 00:25:41.123 "unmap": true, 00:25:41.123 "flush": true, 00:25:41.123 "reset": true, 00:25:41.123 "nvme_admin": false, 00:25:41.123 "nvme_io": false, 00:25:41.123 "nvme_io_md": false, 00:25:41.123 "write_zeroes": true, 00:25:41.123 "zcopy": true, 00:25:41.123 "get_zone_info": false, 00:25:41.123 "zone_management": false, 00:25:41.123 "zone_append": false, 00:25:41.123 "compare": false, 00:25:41.123 "compare_and_write": false, 00:25:41.123 "abort": true, 00:25:41.123 "seek_hole": false, 00:25:41.123 "seek_data": false, 00:25:41.123 "copy": true, 00:25:41.123 "nvme_iov_md": false 00:25:41.123 }, 00:25:41.123 "memory_domains": [ 00:25:41.123 { 00:25:41.123 "dma_device_id": "system", 00:25:41.123 "dma_device_type": 1 00:25:41.123 }, 00:25:41.123 { 00:25:41.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.123 "dma_device_type": 2 00:25:41.123 } 00:25:41.123 ], 00:25:41.123 "driver_specific": {} 00:25:41.123 }' 00:25:41.123 14:18:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.123 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.123 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:41.123 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.382 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:41.949 [2024-07-15 14:18:27.659055] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:41.949 14:18:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:42.208 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:42.208 "name": "Existed_Raid", 00:25:42.208 "uuid": "3b872aec-fe4e-4885-b5c1-beccdd8c3cf9", 00:25:42.208 "strip_size_kb": 0, 00:25:42.208 "state": "online", 00:25:42.208 "raid_level": "raid1", 00:25:42.208 "superblock": false, 00:25:42.208 "num_base_bdevs": 4, 00:25:42.208 "num_base_bdevs_discovered": 3, 00:25:42.208 "num_base_bdevs_operational": 3, 00:25:42.208 "base_bdevs_list": [ 00:25:42.208 { 00:25:42.208 "name": null, 00:25:42.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:42.208 "is_configured": false, 00:25:42.208 "data_offset": 0, 00:25:42.208 "data_size": 65536 00:25:42.208 }, 00:25:42.208 { 00:25:42.208 "name": "BaseBdev2", 00:25:42.208 "uuid": "8ebaa18c-b7de-4d78-ba20-139996ac3592", 00:25:42.208 "is_configured": true, 00:25:42.208 "data_offset": 0, 00:25:42.208 "data_size": 65536 00:25:42.208 }, 00:25:42.208 { 00:25:42.208 "name": "BaseBdev3", 00:25:42.208 "uuid": "76edf266-6695-4c97-8156-7f269819353d", 00:25:42.208 "is_configured": true, 00:25:42.208 "data_offset": 0, 00:25:42.208 "data_size": 65536 00:25:42.208 
}, 00:25:42.208 { 00:25:42.208 "name": "BaseBdev4", 00:25:42.208 "uuid": "5cfe2587-1368-4e2f-9c81-0ef1b1476388", 00:25:42.208 "is_configured": true, 00:25:42.208 "data_offset": 0, 00:25:42.208 "data_size": 65536 00:25:42.208 } 00:25:42.208 ] 00:25:42.208 }' 00:25:42.208 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:42.208 14:18:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.776 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:42.776 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:42.776 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.776 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:43.034 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:43.034 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:43.034 14:18:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:43.293 [2024-07-15 14:18:29.242673] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:43.552 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:43.552 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:43.552 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:43.552 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.809 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:43.809 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:43.809 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:44.066 [2024-07-15 14:18:29.885020] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:44.066 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:44.066 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:44.066 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.066 14:18:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:44.351 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:44.351 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:44.351 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:44.629 [2024-07-15 14:18:30.476166] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 
00:25:44.629 [2024-07-15 14:18:30.476529] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:44.629 [2024-07-15 14:18:30.563796] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:44.629 [2024-07-15 14:18:30.564001] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:44.629 [2024-07-15 14:18:30.564149] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:25:44.629 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:44.629 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:44.629 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.629 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:44.887 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:44.887 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:44.887 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:44.887 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:44.887 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:44.887 14:18:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:45.146 BaseBdev2 00:25:45.146 14:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:45.146 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:25:45.146 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:45.146 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:45.146 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:45.146 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:45.146 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:45.404 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:45.663 [ 00:25:45.663 { 00:25:45.663 "name": "BaseBdev2", 00:25:45.663 "aliases": [ 00:25:45.663 "1d9ad0f0-5adc-454b-9917-0f9e221d30db" 00:25:45.663 ], 00:25:45.663 "product_name": "Malloc disk", 00:25:45.663 "block_size": 512, 00:25:45.663 "num_blocks": 65536, 00:25:45.663 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:45.663 "assigned_rate_limits": { 00:25:45.663 "rw_ios_per_sec": 0, 00:25:45.663 "rw_mbytes_per_sec": 0, 00:25:45.663 "r_mbytes_per_sec": 0, 00:25:45.663 "w_mbytes_per_sec": 0 00:25:45.663 }, 00:25:45.663 "claimed": false, 00:25:45.663 "zoned": false, 00:25:45.663 "supported_io_types": { 00:25:45.663 "read": true, 00:25:45.663 "write": true, 00:25:45.663 
"unmap": true, 00:25:45.663 "flush": true, 00:25:45.663 "reset": true, 00:25:45.663 "nvme_admin": false, 00:25:45.663 "nvme_io": false, 00:25:45.663 "nvme_io_md": false, 00:25:45.663 "write_zeroes": true, 00:25:45.663 "zcopy": true, 00:25:45.663 "get_zone_info": false, 00:25:45.663 "zone_management": false, 00:25:45.663 "zone_append": false, 00:25:45.663 "compare": false, 00:25:45.663 "compare_and_write": false, 00:25:45.663 "abort": true, 00:25:45.663 "seek_hole": false, 00:25:45.663 "seek_data": false, 00:25:45.663 "copy": true, 00:25:45.663 "nvme_iov_md": false 00:25:45.663 }, 00:25:45.663 "memory_domains": [ 00:25:45.663 { 00:25:45.663 "dma_device_id": "system", 00:25:45.663 "dma_device_type": 1 00:25:45.663 }, 00:25:45.663 { 00:25:45.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:45.663 "dma_device_type": 2 00:25:45.663 } 00:25:45.663 ], 00:25:45.663 "driver_specific": {} 00:25:45.663 } 00:25:45.663 ] 00:25:45.663 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:45.663 14:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:45.663 14:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:45.663 14:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:45.921 BaseBdev3 00:25:45.921 14:18:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:45.921 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:25:45.921 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:45.921 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:45.921 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:45.921 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:45.921 14:18:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:46.179 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:46.437 [ 00:25:46.437 { 00:25:46.437 "name": "BaseBdev3", 00:25:46.437 "aliases": [ 00:25:46.437 "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5" 00:25:46.437 ], 00:25:46.437 "product_name": "Malloc disk", 00:25:46.437 "block_size": 512, 00:25:46.437 "num_blocks": 65536, 00:25:46.437 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:46.437 "assigned_rate_limits": { 00:25:46.437 "rw_ios_per_sec": 0, 00:25:46.437 "rw_mbytes_per_sec": 0, 00:25:46.437 "r_mbytes_per_sec": 0, 00:25:46.437 "w_mbytes_per_sec": 0 00:25:46.437 }, 00:25:46.437 "claimed": false, 00:25:46.437 "zoned": false, 00:25:46.437 "supported_io_types": { 00:25:46.437 "read": true, 00:25:46.437 "write": true, 00:25:46.437 "unmap": true, 00:25:46.437 "flush": true, 00:25:46.437 "reset": true, 00:25:46.437 "nvme_admin": false, 00:25:46.437 "nvme_io": false, 00:25:46.437 "nvme_io_md": false, 00:25:46.437 "write_zeroes": true, 00:25:46.437 "zcopy": true, 00:25:46.437 "get_zone_info": false, 00:25:46.437 "zone_management": false, 00:25:46.437 "zone_append": false, 
00:25:46.437 "compare": false, 00:25:46.437 "compare_and_write": false, 00:25:46.437 "abort": true, 00:25:46.437 "seek_hole": false, 00:25:46.437 "seek_data": false, 00:25:46.437 "copy": true, 00:25:46.437 "nvme_iov_md": false 00:25:46.437 }, 00:25:46.437 "memory_domains": [ 00:25:46.437 { 00:25:46.437 "dma_device_id": "system", 00:25:46.438 "dma_device_type": 1 00:25:46.438 }, 00:25:46.438 { 00:25:46.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:46.438 "dma_device_type": 2 00:25:46.438 } 00:25:46.438 ], 00:25:46.438 "driver_specific": {} 00:25:46.438 } 00:25:46.438 ] 00:25:46.438 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:46.438 14:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:46.438 14:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:46.438 14:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:47.005 BaseBdev4 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:47.005 14:18:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:47.571 [ 00:25:47.571 { 00:25:47.571 "name": "BaseBdev4", 00:25:47.571 "aliases": [ 00:25:47.571 "7d716799-ebb2-4943-8394-3fb6a20357a7" 00:25:47.571 ], 00:25:47.571 "product_name": "Malloc disk", 00:25:47.571 "block_size": 512, 00:25:47.571 "num_blocks": 65536, 00:25:47.571 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:47.571 "assigned_rate_limits": { 00:25:47.571 "rw_ios_per_sec": 0, 00:25:47.571 "rw_mbytes_per_sec": 0, 00:25:47.571 "r_mbytes_per_sec": 0, 00:25:47.571 "w_mbytes_per_sec": 0 00:25:47.571 }, 00:25:47.571 "claimed": false, 00:25:47.571 "zoned": false, 00:25:47.571 "supported_io_types": { 00:25:47.571 "read": true, 00:25:47.571 "write": true, 00:25:47.571 "unmap": true, 00:25:47.571 "flush": true, 00:25:47.571 "reset": true, 00:25:47.571 "nvme_admin": false, 00:25:47.571 "nvme_io": false, 00:25:47.571 "nvme_io_md": false, 00:25:47.571 "write_zeroes": true, 00:25:47.571 "zcopy": true, 00:25:47.571 "get_zone_info": false, 00:25:47.571 "zone_management": false, 00:25:47.571 "zone_append": false, 00:25:47.571 "compare": false, 00:25:47.571 "compare_and_write": false, 00:25:47.571 "abort": true, 00:25:47.571 "seek_hole": false, 00:25:47.571 "seek_data": false, 00:25:47.571 "copy": true, 00:25:47.571 "nvme_iov_md": false 00:25:47.571 }, 00:25:47.571 "memory_domains": [ 00:25:47.571 { 00:25:47.571 "dma_device_id": "system", 00:25:47.571 
"dma_device_type": 1 00:25:47.571 }, 00:25:47.571 { 00:25:47.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:47.571 "dma_device_type": 2 00:25:47.571 } 00:25:47.571 ], 00:25:47.571 "driver_specific": {} 00:25:47.571 } 00:25:47.571 ] 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:47.572 [2024-07-15 14:18:33.529735] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:47.572 [2024-07-15 14:18:33.530133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:47.572 [2024-07-15 14:18:33.530287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:47.572 [2024-07-15 14:18:33.532378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:47.572 [2024-07-15 14:18:33.532609] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.572 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:47.828 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:47.828 "name": "Existed_Raid", 00:25:47.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.828 "strip_size_kb": 0, 00:25:47.828 "state": "configuring", 00:25:47.828 "raid_level": "raid1", 00:25:47.828 "superblock": false, 00:25:47.828 "num_base_bdevs": 4, 00:25:47.828 "num_base_bdevs_discovered": 3, 00:25:47.828 "num_base_bdevs_operational": 4, 00:25:47.828 "base_bdevs_list": [ 00:25:47.828 { 00:25:47.828 "name": "BaseBdev1", 00:25:47.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:47.828 "is_configured": false, 
00:25:47.828 "data_offset": 0, 00:25:47.828 "data_size": 0 00:25:47.828 }, 00:25:47.828 { 00:25:47.828 "name": "BaseBdev2", 00:25:47.828 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:47.828 "is_configured": true, 00:25:47.828 "data_offset": 0, 00:25:47.828 "data_size": 65536 00:25:47.828 }, 00:25:47.828 { 00:25:47.828 "name": "BaseBdev3", 00:25:47.828 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:47.828 "is_configured": true, 00:25:47.828 "data_offset": 0, 00:25:47.828 "data_size": 65536 00:25:47.828 }, 00:25:47.828 { 00:25:47.828 "name": "BaseBdev4", 00:25:47.828 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:47.828 "is_configured": true, 00:25:47.828 "data_offset": 0, 00:25:47.828 "data_size": 65536 00:25:47.828 } 00:25:47.828 ] 00:25:47.828 }' 00:25:47.828 14:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:47.828 14:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:48.787 [2024-07-15 14:18:34.709950] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:48.787 14:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:49.046 14:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:49.046 "name": "Existed_Raid", 00:25:49.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.046 "strip_size_kb": 0, 00:25:49.046 "state": "configuring", 00:25:49.046 "raid_level": "raid1", 00:25:49.046 "superblock": false, 00:25:49.046 "num_base_bdevs": 4, 00:25:49.046 "num_base_bdevs_discovered": 2, 00:25:49.046 "num_base_bdevs_operational": 4, 00:25:49.046 "base_bdevs_list": [ 00:25:49.046 { 00:25:49.046 "name": "BaseBdev1", 00:25:49.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:49.046 "is_configured": false, 00:25:49.046 "data_offset": 0, 00:25:49.046 "data_size": 0 00:25:49.046 }, 00:25:49.046 { 00:25:49.046 "name": null, 00:25:49.046 "uuid": 
"1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:49.046 "is_configured": false, 00:25:49.046 "data_offset": 0, 00:25:49.046 "data_size": 65536 00:25:49.046 }, 00:25:49.046 { 00:25:49.046 "name": "BaseBdev3", 00:25:49.046 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:49.046 "is_configured": true, 00:25:49.046 "data_offset": 0, 00:25:49.046 "data_size": 65536 00:25:49.046 }, 00:25:49.046 { 00:25:49.046 "name": "BaseBdev4", 00:25:49.046 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:49.046 "is_configured": true, 00:25:49.046 "data_offset": 0, 00:25:49.046 "data_size": 65536 00:25:49.046 } 00:25:49.046 ] 00:25:49.046 }' 00:25:49.046 14:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:49.046 14:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.612 14:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:49.870 14:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:49.870 14:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:49.870 14:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:50.127 [2024-07-15 14:18:36.111629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:50.127 BaseBdev1 00:25:50.127 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:50.127 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:25:50.127 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:50.127 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:50.127 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:50.387 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:50.387 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:50.387 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:50.645 [ 00:25:50.645 { 00:25:50.645 "name": "BaseBdev1", 00:25:50.645 "aliases": [ 00:25:50.646 "26942438-b6cb-4a6a-97a9-5c65e1d97554" 00:25:50.646 ], 00:25:50.646 "product_name": "Malloc disk", 00:25:50.646 "block_size": 512, 00:25:50.646 "num_blocks": 65536, 00:25:50.646 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:50.646 "assigned_rate_limits": { 00:25:50.646 "rw_ios_per_sec": 0, 00:25:50.646 "rw_mbytes_per_sec": 0, 00:25:50.646 "r_mbytes_per_sec": 0, 00:25:50.646 "w_mbytes_per_sec": 0 00:25:50.646 }, 00:25:50.646 "claimed": true, 00:25:50.646 "claim_type": "exclusive_write", 00:25:50.646 "zoned": false, 00:25:50.646 "supported_io_types": { 00:25:50.646 "read": true, 00:25:50.646 "write": true, 00:25:50.646 "unmap": true, 00:25:50.646 "flush": true, 00:25:50.646 "reset": true, 00:25:50.646 "nvme_admin": false, 00:25:50.646 "nvme_io": false, 00:25:50.646 
"nvme_io_md": false, 00:25:50.646 "write_zeroes": true, 00:25:50.646 "zcopy": true, 00:25:50.646 "get_zone_info": false, 00:25:50.646 "zone_management": false, 00:25:50.646 "zone_append": false, 00:25:50.646 "compare": false, 00:25:50.646 "compare_and_write": false, 00:25:50.646 "abort": true, 00:25:50.646 "seek_hole": false, 00:25:50.646 "seek_data": false, 00:25:50.646 "copy": true, 00:25:50.646 "nvme_iov_md": false 00:25:50.646 }, 00:25:50.646 "memory_domains": [ 00:25:50.646 { 00:25:50.646 "dma_device_id": "system", 00:25:50.646 "dma_device_type": 1 00:25:50.646 }, 00:25:50.646 { 00:25:50.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.646 "dma_device_type": 2 00:25:50.646 } 00:25:50.646 ], 00:25:50.646 "driver_specific": {} 00:25:50.646 } 00:25:50.646 ] 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:50.646 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:50.905 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:50.905 "name": "Existed_Raid", 00:25:50.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:50.905 "strip_size_kb": 0, 00:25:50.905 "state": "configuring", 00:25:50.905 "raid_level": "raid1", 00:25:50.905 "superblock": false, 00:25:50.905 "num_base_bdevs": 4, 00:25:50.905 "num_base_bdevs_discovered": 3, 00:25:50.905 "num_base_bdevs_operational": 4, 00:25:50.905 "base_bdevs_list": [ 00:25:50.905 { 00:25:50.905 "name": "BaseBdev1", 00:25:50.905 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:50.905 "is_configured": true, 00:25:50.905 "data_offset": 0, 00:25:50.905 "data_size": 65536 00:25:50.905 }, 00:25:50.905 { 00:25:50.905 "name": null, 00:25:50.905 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:50.905 "is_configured": false, 00:25:50.905 "data_offset": 0, 00:25:50.905 "data_size": 65536 00:25:50.905 }, 00:25:50.905 { 00:25:50.905 "name": "BaseBdev3", 00:25:50.905 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:50.905 "is_configured": true, 00:25:50.905 "data_offset": 0, 00:25:50.905 "data_size": 65536 00:25:50.905 }, 00:25:50.905 { 00:25:50.905 
"name": "BaseBdev4", 00:25:50.905 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:50.905 "is_configured": true, 00:25:50.905 "data_offset": 0, 00:25:50.905 "data_size": 65536 00:25:50.905 } 00:25:50.905 ] 00:25:50.905 }' 00:25:50.905 14:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:50.905 14:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:51.865 14:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:51.865 14:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.865 14:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:51.865 14:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:52.123 [2024-07-15 14:18:38.060072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.123 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:52.382 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:52.382 "name": "Existed_Raid", 00:25:52.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:52.382 "strip_size_kb": 0, 00:25:52.382 "state": "configuring", 00:25:52.382 "raid_level": "raid1", 00:25:52.382 "superblock": false, 00:25:52.382 "num_base_bdevs": 4, 00:25:52.382 "num_base_bdevs_discovered": 2, 00:25:52.382 "num_base_bdevs_operational": 4, 00:25:52.382 "base_bdevs_list": [ 00:25:52.382 { 00:25:52.382 "name": "BaseBdev1", 00:25:52.382 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:52.382 "is_configured": true, 00:25:52.382 "data_offset": 0, 00:25:52.382 "data_size": 65536 00:25:52.382 }, 00:25:52.382 { 00:25:52.382 "name": null, 00:25:52.382 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:52.382 "is_configured": false, 00:25:52.382 "data_offset": 0, 00:25:52.382 "data_size": 65536 
00:25:52.382 }, 00:25:52.382 { 00:25:52.382 "name": null, 00:25:52.382 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:52.382 "is_configured": false, 00:25:52.382 "data_offset": 0, 00:25:52.382 "data_size": 65536 00:25:52.382 }, 00:25:52.382 { 00:25:52.382 "name": "BaseBdev4", 00:25:52.382 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:52.382 "is_configured": true, 00:25:52.382 "data_offset": 0, 00:25:52.382 "data_size": 65536 00:25:52.382 } 00:25:52.382 ] 00:25:52.382 }' 00:25:52.382 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:52.382 14:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:53.317 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:53.317 14:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.317 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:53.317 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:53.575 [2024-07-15 14:18:39.532291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:53.575 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:53.833 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:53.833 "name": "Existed_Raid", 00:25:53.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:53.834 "strip_size_kb": 0, 00:25:53.834 "state": "configuring", 00:25:53.834 "raid_level": "raid1", 00:25:53.834 "superblock": false, 00:25:53.834 "num_base_bdevs": 4, 00:25:53.834 "num_base_bdevs_discovered": 3, 00:25:53.834 "num_base_bdevs_operational": 4, 00:25:53.834 "base_bdevs_list": [ 00:25:53.834 { 00:25:53.834 "name": "BaseBdev1", 00:25:53.834 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:53.834 
"is_configured": true, 00:25:53.834 "data_offset": 0, 00:25:53.834 "data_size": 65536 00:25:53.834 }, 00:25:53.834 { 00:25:53.834 "name": null, 00:25:53.834 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:53.834 "is_configured": false, 00:25:53.834 "data_offset": 0, 00:25:53.834 "data_size": 65536 00:25:53.834 }, 00:25:53.834 { 00:25:53.834 "name": "BaseBdev3", 00:25:53.834 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:53.834 "is_configured": true, 00:25:53.834 "data_offset": 0, 00:25:53.834 "data_size": 65536 00:25:53.834 }, 00:25:53.834 { 00:25:53.834 "name": "BaseBdev4", 00:25:53.834 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:53.834 "is_configured": true, 00:25:53.834 "data_offset": 0, 00:25:53.834 "data_size": 65536 00:25:53.834 } 00:25:53.834 ] 00:25:53.834 }' 00:25:53.834 14:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:53.834 14:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.770 14:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.770 14:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:54.770 14:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:54.770 14:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:55.028 [2024-07-15 14:18:41.024543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.286 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:55.544 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.544 "name": "Existed_Raid", 00:25:55.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.544 "strip_size_kb": 0, 00:25:55.544 "state": "configuring", 00:25:55.544 "raid_level": "raid1", 00:25:55.544 "superblock": false, 00:25:55.544 
"num_base_bdevs": 4, 00:25:55.544 "num_base_bdevs_discovered": 2, 00:25:55.544 "num_base_bdevs_operational": 4, 00:25:55.544 "base_bdevs_list": [ 00:25:55.544 { 00:25:55.544 "name": null, 00:25:55.544 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:55.544 "is_configured": false, 00:25:55.544 "data_offset": 0, 00:25:55.544 "data_size": 65536 00:25:55.544 }, 00:25:55.544 { 00:25:55.544 "name": null, 00:25:55.544 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:55.544 "is_configured": false, 00:25:55.544 "data_offset": 0, 00:25:55.544 "data_size": 65536 00:25:55.544 }, 00:25:55.544 { 00:25:55.544 "name": "BaseBdev3", 00:25:55.544 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:55.544 "is_configured": true, 00:25:55.544 "data_offset": 0, 00:25:55.544 "data_size": 65536 00:25:55.544 }, 00:25:55.544 { 00:25:55.544 "name": "BaseBdev4", 00:25:55.544 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:55.544 "is_configured": true, 00:25:55.544 "data_offset": 0, 00:25:55.544 "data_size": 65536 00:25:55.544 } 00:25:55.544 ] 00:25:55.544 }' 00:25:55.544 14:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.544 14:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.112 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.112 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:56.679 [2024-07-15 14:18:42.643008] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.679 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:57.246 14:18:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:57.246 "name": "Existed_Raid", 00:25:57.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:57.246 "strip_size_kb": 0, 00:25:57.246 "state": "configuring", 00:25:57.246 "raid_level": "raid1", 00:25:57.246 "superblock": false, 00:25:57.246 "num_base_bdevs": 4, 00:25:57.246 "num_base_bdevs_discovered": 3, 00:25:57.246 "num_base_bdevs_operational": 4, 00:25:57.246 "base_bdevs_list": [ 00:25:57.246 { 00:25:57.246 "name": null, 00:25:57.246 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:57.246 "is_configured": false, 00:25:57.246 "data_offset": 0, 00:25:57.246 "data_size": 65536 00:25:57.246 }, 00:25:57.246 { 00:25:57.246 "name": "BaseBdev2", 00:25:57.246 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:57.246 "is_configured": true, 00:25:57.246 "data_offset": 0, 00:25:57.246 "data_size": 65536 00:25:57.246 }, 00:25:57.246 { 00:25:57.246 "name": "BaseBdev3", 00:25:57.246 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:57.246 "is_configured": true, 00:25:57.246 "data_offset": 0, 00:25:57.246 "data_size": 65536 00:25:57.246 }, 00:25:57.246 { 00:25:57.246 "name": "BaseBdev4", 00:25:57.246 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:57.246 "is_configured": true, 00:25:57.246 "data_offset": 0, 00:25:57.246 "data_size": 65536 00:25:57.246 } 00:25:57.246 ] 00:25:57.246 }' 00:25:57.246 14:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:57.246 14:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.811 14:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.811 14:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:58.069 14:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:58.069 14:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:25:58.069 14:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.328 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 26942438-b6cb-4a6a-97a9-5c65e1d97554 00:25:58.587 [2024-07-15 14:18:44.331801] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:58.587 [2024-07-15 14:18:44.332147] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:25:58.587 [2024-07-15 14:18:44.332200] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:25:58.587 [2024-07-15 14:18:44.332426] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:58.587 [2024-07-15 14:18:44.332822] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:25:58.587 [2024-07-15 14:18:44.332957] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:25:58.587 [2024-07-15 14:18:44.333278] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.587 NewBaseBdev 00:25:58.587 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # 
waitforbdev NewBaseBdev 00:25:58.587 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:25:58.587 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:25:58.587 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:25:58.587 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:25:58.587 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:25:58.587 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:58.846 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:59.105 [ 00:25:59.105 { 00:25:59.105 "name": "NewBaseBdev", 00:25:59.105 "aliases": [ 00:25:59.105 "26942438-b6cb-4a6a-97a9-5c65e1d97554" 00:25:59.105 ], 00:25:59.105 "product_name": "Malloc disk", 00:25:59.105 "block_size": 512, 00:25:59.105 "num_blocks": 65536, 00:25:59.105 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:59.105 "assigned_rate_limits": { 00:25:59.105 "rw_ios_per_sec": 0, 00:25:59.105 "rw_mbytes_per_sec": 0, 00:25:59.105 "r_mbytes_per_sec": 0, 00:25:59.105 "w_mbytes_per_sec": 0 00:25:59.105 }, 00:25:59.105 "claimed": true, 00:25:59.105 "claim_type": "exclusive_write", 00:25:59.105 "zoned": false, 00:25:59.105 "supported_io_types": { 00:25:59.105 "read": true, 00:25:59.105 "write": true, 00:25:59.105 "unmap": true, 00:25:59.105 "flush": true, 00:25:59.105 "reset": true, 00:25:59.105 "nvme_admin": false, 00:25:59.105 "nvme_io": false, 00:25:59.105 "nvme_io_md": false, 00:25:59.105 "write_zeroes": true, 00:25:59.105 "zcopy": true, 00:25:59.105 "get_zone_info": false, 00:25:59.105 "zone_management": false, 00:25:59.105 "zone_append": false, 00:25:59.105 "compare": false, 00:25:59.105 "compare_and_write": false, 00:25:59.105 "abort": true, 00:25:59.105 "seek_hole": false, 00:25:59.105 "seek_data": false, 00:25:59.105 "copy": true, 00:25:59.105 "nvme_iov_md": false 00:25:59.105 }, 00:25:59.105 "memory_domains": [ 00:25:59.105 { 00:25:59.105 "dma_device_id": "system", 00:25:59.105 "dma_device_type": 1 00:25:59.105 }, 00:25:59.105 { 00:25:59.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:59.105 "dma_device_type": 2 00:25:59.105 } 00:25:59.105 ], 00:25:59.105 "driver_specific": {} 00:25:59.105 } 00:25:59.105 ] 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:59.105 14:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.365 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.365 "name": "Existed_Raid", 00:25:59.365 "uuid": "34a45033-f988-4f56-bad3-45ffb18f8d86", 00:25:59.365 "strip_size_kb": 0, 00:25:59.365 "state": "online", 00:25:59.365 "raid_level": "raid1", 00:25:59.365 "superblock": false, 00:25:59.365 "num_base_bdevs": 4, 00:25:59.365 "num_base_bdevs_discovered": 4, 00:25:59.365 "num_base_bdevs_operational": 4, 00:25:59.365 "base_bdevs_list": [ 00:25:59.365 { 00:25:59.365 "name": "NewBaseBdev", 00:25:59.365 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:25:59.365 "is_configured": true, 00:25:59.365 "data_offset": 0, 00:25:59.365 "data_size": 65536 00:25:59.365 }, 00:25:59.365 { 00:25:59.365 "name": "BaseBdev2", 00:25:59.366 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:25:59.366 "is_configured": true, 00:25:59.366 "data_offset": 0, 00:25:59.366 "data_size": 65536 00:25:59.366 }, 00:25:59.366 { 00:25:59.366 "name": "BaseBdev3", 00:25:59.366 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:25:59.366 "is_configured": true, 00:25:59.366 "data_offset": 0, 00:25:59.366 "data_size": 65536 00:25:59.366 }, 00:25:59.366 { 00:25:59.366 "name": "BaseBdev4", 00:25:59.366 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:25:59.366 "is_configured": true, 00:25:59.366 "data_offset": 0, 00:25:59.366 "data_size": 65536 00:25:59.366 } 00:25:59.366 ] 00:25:59.366 }' 00:25:59.366 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.366 14:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:59.934 14:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:00.193 [2024-07-15 14:18:46.096387] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:00.193 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:00.193 "name": "Existed_Raid", 00:26:00.193 "aliases": [ 00:26:00.193 
"34a45033-f988-4f56-bad3-45ffb18f8d86" 00:26:00.193 ], 00:26:00.193 "product_name": "Raid Volume", 00:26:00.193 "block_size": 512, 00:26:00.193 "num_blocks": 65536, 00:26:00.193 "uuid": "34a45033-f988-4f56-bad3-45ffb18f8d86", 00:26:00.193 "assigned_rate_limits": { 00:26:00.193 "rw_ios_per_sec": 0, 00:26:00.193 "rw_mbytes_per_sec": 0, 00:26:00.193 "r_mbytes_per_sec": 0, 00:26:00.193 "w_mbytes_per_sec": 0 00:26:00.193 }, 00:26:00.193 "claimed": false, 00:26:00.193 "zoned": false, 00:26:00.193 "supported_io_types": { 00:26:00.193 "read": true, 00:26:00.193 "write": true, 00:26:00.193 "unmap": false, 00:26:00.193 "flush": false, 00:26:00.193 "reset": true, 00:26:00.193 "nvme_admin": false, 00:26:00.193 "nvme_io": false, 00:26:00.193 "nvme_io_md": false, 00:26:00.193 "write_zeroes": true, 00:26:00.193 "zcopy": false, 00:26:00.193 "get_zone_info": false, 00:26:00.193 "zone_management": false, 00:26:00.193 "zone_append": false, 00:26:00.193 "compare": false, 00:26:00.193 "compare_and_write": false, 00:26:00.193 "abort": false, 00:26:00.193 "seek_hole": false, 00:26:00.193 "seek_data": false, 00:26:00.193 "copy": false, 00:26:00.193 "nvme_iov_md": false 00:26:00.193 }, 00:26:00.193 "memory_domains": [ 00:26:00.193 { 00:26:00.193 "dma_device_id": "system", 00:26:00.193 "dma_device_type": 1 00:26:00.193 }, 00:26:00.193 { 00:26:00.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.193 "dma_device_type": 2 00:26:00.193 }, 00:26:00.193 { 00:26:00.193 "dma_device_id": "system", 00:26:00.193 "dma_device_type": 1 00:26:00.193 }, 00:26:00.193 { 00:26:00.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.193 "dma_device_type": 2 00:26:00.193 }, 00:26:00.193 { 00:26:00.193 "dma_device_id": "system", 00:26:00.193 "dma_device_type": 1 00:26:00.193 }, 00:26:00.193 { 00:26:00.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.194 "dma_device_type": 2 00:26:00.194 }, 00:26:00.194 { 00:26:00.194 "dma_device_id": "system", 00:26:00.194 "dma_device_type": 1 00:26:00.194 }, 00:26:00.194 { 00:26:00.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.194 "dma_device_type": 2 00:26:00.194 } 00:26:00.194 ], 00:26:00.194 "driver_specific": { 00:26:00.194 "raid": { 00:26:00.194 "uuid": "34a45033-f988-4f56-bad3-45ffb18f8d86", 00:26:00.194 "strip_size_kb": 0, 00:26:00.194 "state": "online", 00:26:00.194 "raid_level": "raid1", 00:26:00.194 "superblock": false, 00:26:00.194 "num_base_bdevs": 4, 00:26:00.194 "num_base_bdevs_discovered": 4, 00:26:00.194 "num_base_bdevs_operational": 4, 00:26:00.194 "base_bdevs_list": [ 00:26:00.194 { 00:26:00.194 "name": "NewBaseBdev", 00:26:00.194 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:26:00.194 "is_configured": true, 00:26:00.194 "data_offset": 0, 00:26:00.194 "data_size": 65536 00:26:00.194 }, 00:26:00.194 { 00:26:00.194 "name": "BaseBdev2", 00:26:00.194 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:26:00.194 "is_configured": true, 00:26:00.194 "data_offset": 0, 00:26:00.194 "data_size": 65536 00:26:00.194 }, 00:26:00.194 { 00:26:00.194 "name": "BaseBdev3", 00:26:00.194 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:26:00.194 "is_configured": true, 00:26:00.194 "data_offset": 0, 00:26:00.194 "data_size": 65536 00:26:00.194 }, 00:26:00.194 { 00:26:00.194 "name": "BaseBdev4", 00:26:00.194 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:26:00.194 "is_configured": true, 00:26:00.194 "data_offset": 0, 00:26:00.194 "data_size": 65536 00:26:00.194 } 00:26:00.194 ] 00:26:00.194 } 00:26:00.194 } 00:26:00.194 }' 00:26:00.194 14:18:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:00.194 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:00.194 BaseBdev2 00:26:00.194 BaseBdev3 00:26:00.194 BaseBdev4' 00:26:00.194 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:00.194 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:00.194 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:00.453 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:00.453 "name": "NewBaseBdev", 00:26:00.453 "aliases": [ 00:26:00.453 "26942438-b6cb-4a6a-97a9-5c65e1d97554" 00:26:00.453 ], 00:26:00.453 "product_name": "Malloc disk", 00:26:00.453 "block_size": 512, 00:26:00.453 "num_blocks": 65536, 00:26:00.453 "uuid": "26942438-b6cb-4a6a-97a9-5c65e1d97554", 00:26:00.453 "assigned_rate_limits": { 00:26:00.453 "rw_ios_per_sec": 0, 00:26:00.453 "rw_mbytes_per_sec": 0, 00:26:00.453 "r_mbytes_per_sec": 0, 00:26:00.453 "w_mbytes_per_sec": 0 00:26:00.453 }, 00:26:00.453 "claimed": true, 00:26:00.453 "claim_type": "exclusive_write", 00:26:00.453 "zoned": false, 00:26:00.453 "supported_io_types": { 00:26:00.453 "read": true, 00:26:00.453 "write": true, 00:26:00.453 "unmap": true, 00:26:00.453 "flush": true, 00:26:00.453 "reset": true, 00:26:00.453 "nvme_admin": false, 00:26:00.453 "nvme_io": false, 00:26:00.453 "nvme_io_md": false, 00:26:00.453 "write_zeroes": true, 00:26:00.453 "zcopy": true, 00:26:00.453 "get_zone_info": false, 00:26:00.453 "zone_management": false, 00:26:00.453 "zone_append": false, 00:26:00.453 "compare": false, 00:26:00.453 "compare_and_write": false, 00:26:00.453 "abort": true, 00:26:00.453 "seek_hole": false, 00:26:00.453 "seek_data": false, 00:26:00.453 "copy": true, 00:26:00.453 "nvme_iov_md": false 00:26:00.453 }, 00:26:00.453 "memory_domains": [ 00:26:00.453 { 00:26:00.453 "dma_device_id": "system", 00:26:00.453 "dma_device_type": 1 00:26:00.453 }, 00:26:00.453 { 00:26:00.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:00.453 "dma_device_type": 2 00:26:00.453 } 00:26:00.453 ], 00:26:00.453 "driver_specific": {} 00:26:00.453 }' 00:26:00.453 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:00.713 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:00.972 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.972 14:18:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:00.972 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:00.972 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:00.972 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:00.972 14:18:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:01.230 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:01.230 "name": "BaseBdev2", 00:26:01.230 "aliases": [ 00:26:01.230 "1d9ad0f0-5adc-454b-9917-0f9e221d30db" 00:26:01.230 ], 00:26:01.230 "product_name": "Malloc disk", 00:26:01.230 "block_size": 512, 00:26:01.230 "num_blocks": 65536, 00:26:01.230 "uuid": "1d9ad0f0-5adc-454b-9917-0f9e221d30db", 00:26:01.230 "assigned_rate_limits": { 00:26:01.230 "rw_ios_per_sec": 0, 00:26:01.230 "rw_mbytes_per_sec": 0, 00:26:01.230 "r_mbytes_per_sec": 0, 00:26:01.230 "w_mbytes_per_sec": 0 00:26:01.230 }, 00:26:01.230 "claimed": true, 00:26:01.230 "claim_type": "exclusive_write", 00:26:01.230 "zoned": false, 00:26:01.230 "supported_io_types": { 00:26:01.230 "read": true, 00:26:01.230 "write": true, 00:26:01.230 "unmap": true, 00:26:01.230 "flush": true, 00:26:01.230 "reset": true, 00:26:01.230 "nvme_admin": false, 00:26:01.230 "nvme_io": false, 00:26:01.230 "nvme_io_md": false, 00:26:01.230 "write_zeroes": true, 00:26:01.231 "zcopy": true, 00:26:01.231 "get_zone_info": false, 00:26:01.231 "zone_management": false, 00:26:01.231 "zone_append": false, 00:26:01.231 "compare": false, 00:26:01.231 "compare_and_write": false, 00:26:01.231 "abort": true, 00:26:01.231 "seek_hole": false, 00:26:01.231 "seek_data": false, 00:26:01.231 "copy": true, 00:26:01.231 "nvme_iov_md": false 00:26:01.231 }, 00:26:01.231 "memory_domains": [ 00:26:01.231 { 00:26:01.231 "dma_device_id": "system", 00:26:01.231 "dma_device_type": 1 00:26:01.231 }, 00:26:01.231 { 00:26:01.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.231 "dma_device_type": 2 00:26:01.231 } 00:26:01.231 ], 00:26:01.231 "driver_specific": {} 00:26:01.231 }' 00:26:01.231 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.231 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.231 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:01.231 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.231 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:01.489 14:18:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:01.489 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:01.748 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:01.748 "name": "BaseBdev3", 00:26:01.748 "aliases": [ 00:26:01.748 "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5" 00:26:01.748 ], 00:26:01.748 "product_name": "Malloc disk", 00:26:01.748 "block_size": 512, 00:26:01.748 "num_blocks": 65536, 00:26:01.748 "uuid": "c4ebcfbd-fe6e-4ecf-81ae-8fca9d97c8f5", 00:26:01.748 "assigned_rate_limits": { 00:26:01.748 "rw_ios_per_sec": 0, 00:26:01.748 "rw_mbytes_per_sec": 0, 00:26:01.748 "r_mbytes_per_sec": 0, 00:26:01.748 "w_mbytes_per_sec": 0 00:26:01.748 }, 00:26:01.748 "claimed": true, 00:26:01.748 "claim_type": "exclusive_write", 00:26:01.748 "zoned": false, 00:26:01.748 "supported_io_types": { 00:26:01.748 "read": true, 00:26:01.748 "write": true, 00:26:01.748 "unmap": true, 00:26:01.748 "flush": true, 00:26:01.748 "reset": true, 00:26:01.748 "nvme_admin": false, 00:26:01.748 "nvme_io": false, 00:26:01.748 "nvme_io_md": false, 00:26:01.748 "write_zeroes": true, 00:26:01.748 "zcopy": true, 00:26:01.748 "get_zone_info": false, 00:26:01.748 "zone_management": false, 00:26:01.748 "zone_append": false, 00:26:01.748 "compare": false, 00:26:01.748 "compare_and_write": false, 00:26:01.748 "abort": true, 00:26:01.748 "seek_hole": false, 00:26:01.748 "seek_data": false, 00:26:01.748 "copy": true, 00:26:01.748 "nvme_iov_md": false 00:26:01.748 }, 00:26:01.748 "memory_domains": [ 00:26:01.748 { 00:26:01.748 "dma_device_id": "system", 00:26:01.748 "dma_device_type": 1 00:26:01.748 }, 00:26:01.748 { 00:26:01.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:01.748 "dma_device_type": 2 00:26:01.748 } 00:26:01.748 ], 00:26:01.748 "driver_specific": {} 00:26:01.748 }' 00:26:01.748 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:01.748 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:02.007 14:18:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.007 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.275 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:02.275 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:02.275 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:02.275 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:02.535 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:02.535 "name": "BaseBdev4", 00:26:02.535 "aliases": [ 00:26:02.535 "7d716799-ebb2-4943-8394-3fb6a20357a7" 00:26:02.535 ], 00:26:02.535 "product_name": "Malloc disk", 00:26:02.535 "block_size": 512, 00:26:02.535 "num_blocks": 65536, 00:26:02.535 "uuid": "7d716799-ebb2-4943-8394-3fb6a20357a7", 00:26:02.535 "assigned_rate_limits": { 00:26:02.535 "rw_ios_per_sec": 0, 00:26:02.535 "rw_mbytes_per_sec": 0, 00:26:02.535 "r_mbytes_per_sec": 0, 00:26:02.535 "w_mbytes_per_sec": 0 00:26:02.535 }, 00:26:02.535 "claimed": true, 00:26:02.535 "claim_type": "exclusive_write", 00:26:02.535 "zoned": false, 00:26:02.535 "supported_io_types": { 00:26:02.535 "read": true, 00:26:02.535 "write": true, 00:26:02.535 "unmap": true, 00:26:02.535 "flush": true, 00:26:02.535 "reset": true, 00:26:02.535 "nvme_admin": false, 00:26:02.535 "nvme_io": false, 00:26:02.535 "nvme_io_md": false, 00:26:02.535 "write_zeroes": true, 00:26:02.535 "zcopy": true, 00:26:02.535 "get_zone_info": false, 00:26:02.535 "zone_management": false, 00:26:02.535 "zone_append": false, 00:26:02.535 "compare": false, 00:26:02.535 "compare_and_write": false, 00:26:02.535 "abort": true, 00:26:02.535 "seek_hole": false, 00:26:02.535 "seek_data": false, 00:26:02.535 "copy": true, 00:26:02.535 "nvme_iov_md": false 00:26:02.535 }, 00:26:02.535 "memory_domains": [ 00:26:02.535 { 00:26:02.535 "dma_device_id": "system", 00:26:02.535 "dma_device_type": 1 00:26:02.535 }, 00:26:02.535 { 00:26:02.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:02.535 "dma_device_type": 2 00:26:02.535 } 00:26:02.535 ], 00:26:02.535 "driver_specific": {} 00:26:02.535 }' 00:26:02.535 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.535 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:02.535 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:02.535 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.535 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:02.793 14:18:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:03.052 [2024-07-15 14:18:49.028621] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:03.052 [2024-07-15 14:18:49.028855] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid 
bdev state changing from online to offline 00:26:03.052 [2024-07-15 14:18:49.029047] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:03.052 [2024-07-15 14:18:49.029358] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:03.052 [2024-07-15 14:18:49.029486] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:26:03.052 14:18:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 206846 00:26:03.052 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 206846 ']' 00:26:03.052 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 206846 00:26:03.052 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:26:03.052 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:03.052 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 206846 00:26:03.311 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:03.311 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:03.311 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 206846' 00:26:03.311 killing process with pid 206846 00:26:03.311 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 206846 00:26:03.311 [2024-07-15 14:18:49.072463] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:03.311 14:18:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 206846 00:26:03.575 [2024-07-15 14:18:49.413551] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:26:04.952 00:26:04.952 real 0m37.712s 00:26:04.952 user 1m9.575s 00:26:04.952 sys 0m4.493s 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:04.952 ************************************ 00:26:04.952 END TEST raid_state_function_test 00:26:04.952 ************************************ 00:26:04.952 14:18:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:04.952 14:18:50 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:26:04.952 14:18:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:26:04.952 14:18:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.952 14:18:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:04.952 ************************************ 00:26:04.952 START TEST raid_state_function_test_sb 00:26:04.952 ************************************ 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 true 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:26:04.952 14:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=207974 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 207974' 00:26:04.952 Process raid pid: 207974 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 207974 /var/tmp/spdk-raid.sock 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- 
# '[' -z 207974 ']' 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:04.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:04.952 14:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:04.952 [2024-07-15 14:18:50.665716] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:26:04.952 [2024-07-15 14:18:50.666412] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.952 [2024-07-15 14:18:50.817758] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.211 [2024-07-15 14:18:51.096434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.470 [2024-07-15 14:18:51.317858] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:05.729 14:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:05.729 14:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:26:05.729 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:05.987 [2024-07-15 14:18:51.914342] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:05.987 [2024-07-15 14:18:51.914893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:05.987 [2024-07-15 14:18:51.915044] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:05.988 [2024-07-15 14:18:51.915224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:05.988 [2024-07-15 14:18:51.915347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:05.988 [2024-07-15 14:18:51.915494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:05.988 [2024-07-15 14:18:51.915628] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:05.988 [2024-07-15 14:18:51.915800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.988 14:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:06.246 14:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:06.246 "name": "Existed_Raid", 00:26:06.246 "uuid": "8d7694f1-3b6c-46c9-937e-13a0d246c8d7", 00:26:06.246 "strip_size_kb": 0, 00:26:06.246 "state": "configuring", 00:26:06.246 "raid_level": "raid1", 00:26:06.246 "superblock": true, 00:26:06.246 "num_base_bdevs": 4, 00:26:06.246 "num_base_bdevs_discovered": 0, 00:26:06.246 "num_base_bdevs_operational": 4, 00:26:06.246 "base_bdevs_list": [ 00:26:06.246 { 00:26:06.246 "name": "BaseBdev1", 00:26:06.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.246 "is_configured": false, 00:26:06.246 "data_offset": 0, 00:26:06.246 "data_size": 0 00:26:06.246 }, 00:26:06.246 { 00:26:06.247 "name": "BaseBdev2", 00:26:06.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.247 "is_configured": false, 00:26:06.247 "data_offset": 0, 00:26:06.247 "data_size": 0 00:26:06.247 }, 00:26:06.247 { 00:26:06.247 "name": "BaseBdev3", 00:26:06.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.247 "is_configured": false, 00:26:06.247 "data_offset": 0, 00:26:06.247 "data_size": 0 00:26:06.247 }, 00:26:06.247 { 00:26:06.247 "name": "BaseBdev4", 00:26:06.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:06.247 "is_configured": false, 00:26:06.247 "data_offset": 0, 00:26:06.247 "data_size": 0 00:26:06.247 } 00:26:06.247 ] 00:26:06.247 }' 00:26:06.247 14:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:06.247 14:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:07.184 14:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:07.184 [2024-07-15 14:18:53.166432] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:07.184 [2024-07-15 14:18:53.166911] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:26:07.442 14:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:07.442 [2024-07-15 14:18:53.414560] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:07.442 [2024-07-15 14:18:53.415527] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:07.442 [2024-07-15 14:18:53.415701] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:07.442 [2024-07-15 14:18:53.415920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:07.442 [2024-07-15 14:18:53.416070] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:07.442 [2024-07-15 14:18:53.416243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:07.442 [2024-07-15 14:18:53.416376] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:07.442 [2024-07-15 14:18:53.416554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:07.442 14:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:08.008 [2024-07-15 14:18:53.749284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:08.008 BaseBdev1 00:26:08.008 14:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:26:08.008 14:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:08.008 14:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:08.008 14:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:08.008 14:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:08.008 14:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:08.008 14:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:08.267 14:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:08.525 [ 00:26:08.525 { 00:26:08.525 "name": "BaseBdev1", 00:26:08.525 "aliases": [ 00:26:08.525 "4f8d9873-d347-4603-b484-4b5ded9c5d5a" 00:26:08.525 ], 00:26:08.525 "product_name": "Malloc disk", 00:26:08.525 "block_size": 512, 00:26:08.525 "num_blocks": 65536, 00:26:08.525 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:08.525 "assigned_rate_limits": { 00:26:08.525 "rw_ios_per_sec": 0, 00:26:08.525 "rw_mbytes_per_sec": 0, 00:26:08.525 "r_mbytes_per_sec": 0, 00:26:08.525 "w_mbytes_per_sec": 0 00:26:08.525 }, 00:26:08.525 "claimed": true, 00:26:08.525 "claim_type": "exclusive_write", 00:26:08.525 "zoned": false, 00:26:08.525 "supported_io_types": { 00:26:08.525 "read": true, 00:26:08.525 "write": true, 00:26:08.525 "unmap": true, 00:26:08.525 "flush": true, 00:26:08.525 "reset": true, 00:26:08.525 "nvme_admin": false, 00:26:08.525 "nvme_io": false, 00:26:08.525 "nvme_io_md": false, 00:26:08.525 "write_zeroes": true, 00:26:08.525 "zcopy": true, 00:26:08.525 "get_zone_info": false, 00:26:08.525 "zone_management": false, 00:26:08.525 "zone_append": false, 00:26:08.525 "compare": false, 00:26:08.525 "compare_and_write": false, 00:26:08.525 "abort": true, 00:26:08.525 "seek_hole": false, 00:26:08.525 "seek_data": false, 00:26:08.525 
"copy": true, 00:26:08.525 "nvme_iov_md": false 00:26:08.526 }, 00:26:08.526 "memory_domains": [ 00:26:08.526 { 00:26:08.526 "dma_device_id": "system", 00:26:08.526 "dma_device_type": 1 00:26:08.526 }, 00:26:08.526 { 00:26:08.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:08.526 "dma_device_type": 2 00:26:08.526 } 00:26:08.526 ], 00:26:08.526 "driver_specific": {} 00:26:08.526 } 00:26:08.526 ] 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:08.526 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:08.785 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:08.785 "name": "Existed_Raid", 00:26:08.785 "uuid": "bd78a974-9cef-4569-a146-dbf75fea14bd", 00:26:08.785 "strip_size_kb": 0, 00:26:08.785 "state": "configuring", 00:26:08.785 "raid_level": "raid1", 00:26:08.785 "superblock": true, 00:26:08.785 "num_base_bdevs": 4, 00:26:08.785 "num_base_bdevs_discovered": 1, 00:26:08.785 "num_base_bdevs_operational": 4, 00:26:08.785 "base_bdevs_list": [ 00:26:08.785 { 00:26:08.785 "name": "BaseBdev1", 00:26:08.785 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:08.785 "is_configured": true, 00:26:08.785 "data_offset": 2048, 00:26:08.785 "data_size": 63488 00:26:08.785 }, 00:26:08.785 { 00:26:08.785 "name": "BaseBdev2", 00:26:08.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.785 "is_configured": false, 00:26:08.785 "data_offset": 0, 00:26:08.785 "data_size": 0 00:26:08.785 }, 00:26:08.785 { 00:26:08.785 "name": "BaseBdev3", 00:26:08.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.785 "is_configured": false, 00:26:08.785 "data_offset": 0, 00:26:08.785 "data_size": 0 00:26:08.785 }, 00:26:08.785 { 00:26:08.785 "name": "BaseBdev4", 00:26:08.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:08.785 "is_configured": false, 00:26:08.785 "data_offset": 0, 00:26:08.785 "data_size": 0 00:26:08.785 } 00:26:08.785 ] 00:26:08.785 }' 00:26:08.785 14:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:26:08.785 14:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:09.352 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:09.611 [2024-07-15 14:18:55.477739] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:09.611 [2024-07-15 14:18:55.478188] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:26:09.611 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:09.892 [2024-07-15 14:18:55.717882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:09.892 [2024-07-15 14:18:55.719895] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:09.892 [2024-07-15 14:18:55.720638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:09.892 [2024-07-15 14:18:55.720808] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:26:09.892 [2024-07-15 14:18:55.721128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:26:09.892 [2024-07-15 14:18:55.721263] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:26:09.892 [2024-07-15 14:18:55.721503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:09.892 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:10.151 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:26:10.151 "name": "Existed_Raid", 00:26:10.151 "uuid": "bdfc69e6-038b-4974-ac37-b3657c69315d", 00:26:10.151 "strip_size_kb": 0, 00:26:10.151 "state": "configuring", 00:26:10.151 "raid_level": "raid1", 00:26:10.151 "superblock": true, 00:26:10.151 "num_base_bdevs": 4, 00:26:10.151 "num_base_bdevs_discovered": 1, 00:26:10.151 "num_base_bdevs_operational": 4, 00:26:10.151 "base_bdevs_list": [ 00:26:10.151 { 00:26:10.151 "name": "BaseBdev1", 00:26:10.151 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:10.151 "is_configured": true, 00:26:10.151 "data_offset": 2048, 00:26:10.151 "data_size": 63488 00:26:10.151 }, 00:26:10.151 { 00:26:10.151 "name": "BaseBdev2", 00:26:10.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.151 "is_configured": false, 00:26:10.151 "data_offset": 0, 00:26:10.151 "data_size": 0 00:26:10.151 }, 00:26:10.151 { 00:26:10.151 "name": "BaseBdev3", 00:26:10.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.151 "is_configured": false, 00:26:10.151 "data_offset": 0, 00:26:10.151 "data_size": 0 00:26:10.151 }, 00:26:10.151 { 00:26:10.151 "name": "BaseBdev4", 00:26:10.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:10.151 "is_configured": false, 00:26:10.151 "data_offset": 0, 00:26:10.151 "data_size": 0 00:26:10.151 } 00:26:10.151 ] 00:26:10.151 }' 00:26:10.151 14:18:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:10.151 14:18:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:10.716 14:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:10.975 [2024-07-15 14:18:56.914085] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:10.975 BaseBdev2 00:26:10.975 14:18:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:26:10.975 14:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:10.975 14:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:10.975 14:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:10.975 14:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:10.975 14:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:10.975 14:18:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:11.234 14:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:11.493 [ 00:26:11.493 { 00:26:11.493 "name": "BaseBdev2", 00:26:11.493 "aliases": [ 00:26:11.493 "bedcadbc-1191-46e6-b614-8a8c66ad7d2f" 00:26:11.493 ], 00:26:11.493 "product_name": "Malloc disk", 00:26:11.493 "block_size": 512, 00:26:11.493 "num_blocks": 65536, 00:26:11.493 "uuid": "bedcadbc-1191-46e6-b614-8a8c66ad7d2f", 00:26:11.493 "assigned_rate_limits": { 00:26:11.493 "rw_ios_per_sec": 0, 00:26:11.493 "rw_mbytes_per_sec": 0, 00:26:11.493 "r_mbytes_per_sec": 0, 00:26:11.493 "w_mbytes_per_sec": 0 00:26:11.493 }, 00:26:11.493 "claimed": true, 00:26:11.493 
"claim_type": "exclusive_write", 00:26:11.493 "zoned": false, 00:26:11.493 "supported_io_types": { 00:26:11.493 "read": true, 00:26:11.493 "write": true, 00:26:11.493 "unmap": true, 00:26:11.493 "flush": true, 00:26:11.493 "reset": true, 00:26:11.493 "nvme_admin": false, 00:26:11.493 "nvme_io": false, 00:26:11.493 "nvme_io_md": false, 00:26:11.493 "write_zeroes": true, 00:26:11.493 "zcopy": true, 00:26:11.493 "get_zone_info": false, 00:26:11.493 "zone_management": false, 00:26:11.493 "zone_append": false, 00:26:11.493 "compare": false, 00:26:11.493 "compare_and_write": false, 00:26:11.493 "abort": true, 00:26:11.493 "seek_hole": false, 00:26:11.493 "seek_data": false, 00:26:11.493 "copy": true, 00:26:11.493 "nvme_iov_md": false 00:26:11.493 }, 00:26:11.493 "memory_domains": [ 00:26:11.493 { 00:26:11.493 "dma_device_id": "system", 00:26:11.493 "dma_device_type": 1 00:26:11.493 }, 00:26:11.493 { 00:26:11.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.493 "dma_device_type": 2 00:26:11.493 } 00:26:11.493 ], 00:26:11.493 "driver_specific": {} 00:26:11.493 } 00:26:11.493 ] 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:11.493 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.780 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:11.780 "name": "Existed_Raid", 00:26:11.780 "uuid": "bdfc69e6-038b-4974-ac37-b3657c69315d", 00:26:11.780 "strip_size_kb": 0, 00:26:11.780 "state": "configuring", 00:26:11.780 "raid_level": "raid1", 00:26:11.780 "superblock": true, 00:26:11.780 "num_base_bdevs": 4, 00:26:11.780 "num_base_bdevs_discovered": 2, 00:26:11.780 "num_base_bdevs_operational": 4, 00:26:11.780 "base_bdevs_list": [ 00:26:11.780 { 00:26:11.780 "name": "BaseBdev1", 00:26:11.780 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:11.780 
"is_configured": true, 00:26:11.780 "data_offset": 2048, 00:26:11.780 "data_size": 63488 00:26:11.780 }, 00:26:11.780 { 00:26:11.780 "name": "BaseBdev2", 00:26:11.780 "uuid": "bedcadbc-1191-46e6-b614-8a8c66ad7d2f", 00:26:11.780 "is_configured": true, 00:26:11.780 "data_offset": 2048, 00:26:11.780 "data_size": 63488 00:26:11.780 }, 00:26:11.780 { 00:26:11.780 "name": "BaseBdev3", 00:26:11.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.780 "is_configured": false, 00:26:11.780 "data_offset": 0, 00:26:11.780 "data_size": 0 00:26:11.780 }, 00:26:11.780 { 00:26:11.780 "name": "BaseBdev4", 00:26:11.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.780 "is_configured": false, 00:26:11.780 "data_offset": 0, 00:26:11.780 "data_size": 0 00:26:11.780 } 00:26:11.780 ] 00:26:11.780 }' 00:26:11.780 14:18:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:11.780 14:18:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:12.713 14:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:12.969 [2024-07-15 14:18:58.757621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:12.969 BaseBdev3 00:26:12.969 14:18:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:26:12.969 14:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:26:12.969 14:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:12.969 14:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:12.969 14:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:12.969 14:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:12.969 14:18:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:13.226 14:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:13.483 [ 00:26:13.483 { 00:26:13.483 "name": "BaseBdev3", 00:26:13.483 "aliases": [ 00:26:13.483 "a7178ddd-b706-409d-95cf-291f9afc7bd6" 00:26:13.483 ], 00:26:13.483 "product_name": "Malloc disk", 00:26:13.483 "block_size": 512, 00:26:13.483 "num_blocks": 65536, 00:26:13.483 "uuid": "a7178ddd-b706-409d-95cf-291f9afc7bd6", 00:26:13.483 "assigned_rate_limits": { 00:26:13.483 "rw_ios_per_sec": 0, 00:26:13.483 "rw_mbytes_per_sec": 0, 00:26:13.483 "r_mbytes_per_sec": 0, 00:26:13.483 "w_mbytes_per_sec": 0 00:26:13.483 }, 00:26:13.483 "claimed": true, 00:26:13.483 "claim_type": "exclusive_write", 00:26:13.483 "zoned": false, 00:26:13.483 "supported_io_types": { 00:26:13.483 "read": true, 00:26:13.483 "write": true, 00:26:13.483 "unmap": true, 00:26:13.483 "flush": true, 00:26:13.483 "reset": true, 00:26:13.483 "nvme_admin": false, 00:26:13.483 "nvme_io": false, 00:26:13.483 "nvme_io_md": false, 00:26:13.483 "write_zeroes": true, 00:26:13.483 "zcopy": true, 00:26:13.483 "get_zone_info": false, 00:26:13.483 "zone_management": false, 00:26:13.483 "zone_append": false, 00:26:13.483 "compare": 
false, 00:26:13.483 "compare_and_write": false, 00:26:13.483 "abort": true, 00:26:13.483 "seek_hole": false, 00:26:13.483 "seek_data": false, 00:26:13.483 "copy": true, 00:26:13.483 "nvme_iov_md": false 00:26:13.483 }, 00:26:13.483 "memory_domains": [ 00:26:13.483 { 00:26:13.483 "dma_device_id": "system", 00:26:13.483 "dma_device_type": 1 00:26:13.483 }, 00:26:13.483 { 00:26:13.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.483 "dma_device_type": 2 00:26:13.483 } 00:26:13.483 ], 00:26:13.483 "driver_specific": {} 00:26:13.483 } 00:26:13.483 ] 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.483 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.741 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:13.741 "name": "Existed_Raid", 00:26:13.741 "uuid": "bdfc69e6-038b-4974-ac37-b3657c69315d", 00:26:13.741 "strip_size_kb": 0, 00:26:13.741 "state": "configuring", 00:26:13.741 "raid_level": "raid1", 00:26:13.741 "superblock": true, 00:26:13.741 "num_base_bdevs": 4, 00:26:13.741 "num_base_bdevs_discovered": 3, 00:26:13.741 "num_base_bdevs_operational": 4, 00:26:13.741 "base_bdevs_list": [ 00:26:13.741 { 00:26:13.741 "name": "BaseBdev1", 00:26:13.741 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:13.741 "is_configured": true, 00:26:13.741 "data_offset": 2048, 00:26:13.741 "data_size": 63488 00:26:13.741 }, 00:26:13.741 { 00:26:13.741 "name": "BaseBdev2", 00:26:13.741 "uuid": "bedcadbc-1191-46e6-b614-8a8c66ad7d2f", 00:26:13.741 "is_configured": true, 00:26:13.741 "data_offset": 2048, 00:26:13.741 "data_size": 63488 00:26:13.741 }, 00:26:13.741 { 00:26:13.741 "name": "BaseBdev3", 00:26:13.741 "uuid": "a7178ddd-b706-409d-95cf-291f9afc7bd6", 00:26:13.741 "is_configured": true, 00:26:13.741 "data_offset": 2048, 00:26:13.741 "data_size": 
63488 00:26:13.741 }, 00:26:13.741 { 00:26:13.741 "name": "BaseBdev4", 00:26:13.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.741 "is_configured": false, 00:26:13.741 "data_offset": 0, 00:26:13.741 "data_size": 0 00:26:13.741 } 00:26:13.741 ] 00:26:13.741 }' 00:26:13.741 14:18:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:13.741 14:18:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:14.306 14:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:14.565 [2024-07-15 14:19:00.526366] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:14.565 [2024-07-15 14:19:00.526844] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:26:14.565 [2024-07-15 14:19:00.526979] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:14.565 [2024-07-15 14:19:00.527136] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:26:14.565 [2024-07-15 14:19:00.527500] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:26:14.565 [2024-07-15 14:19:00.527625] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:26:14.565 [2024-07-15 14:19:00.527869] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:14.565 BaseBdev4 00:26:14.565 14:19:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:26:14.565 14:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:14.565 14:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:14.565 14:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:14.565 14:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:14.565 14:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:14.565 14:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:15.131 14:19:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:15.389 [ 00:26:15.389 { 00:26:15.389 "name": "BaseBdev4", 00:26:15.389 "aliases": [ 00:26:15.389 "7227ba3b-f6df-4bfd-a5ae-07805007be80" 00:26:15.389 ], 00:26:15.389 "product_name": "Malloc disk", 00:26:15.389 "block_size": 512, 00:26:15.389 "num_blocks": 65536, 00:26:15.389 "uuid": "7227ba3b-f6df-4bfd-a5ae-07805007be80", 00:26:15.389 "assigned_rate_limits": { 00:26:15.389 "rw_ios_per_sec": 0, 00:26:15.389 "rw_mbytes_per_sec": 0, 00:26:15.389 "r_mbytes_per_sec": 0, 00:26:15.389 "w_mbytes_per_sec": 0 00:26:15.389 }, 00:26:15.389 "claimed": true, 00:26:15.389 "claim_type": "exclusive_write", 00:26:15.389 "zoned": false, 00:26:15.389 "supported_io_types": { 00:26:15.389 "read": true, 00:26:15.389 "write": true, 00:26:15.389 "unmap": true, 00:26:15.389 "flush": true, 00:26:15.389 "reset": true, 00:26:15.389 "nvme_admin": false, 00:26:15.390 
"nvme_io": false, 00:26:15.390 "nvme_io_md": false, 00:26:15.390 "write_zeroes": true, 00:26:15.390 "zcopy": true, 00:26:15.390 "get_zone_info": false, 00:26:15.390 "zone_management": false, 00:26:15.390 "zone_append": false, 00:26:15.390 "compare": false, 00:26:15.390 "compare_and_write": false, 00:26:15.390 "abort": true, 00:26:15.390 "seek_hole": false, 00:26:15.390 "seek_data": false, 00:26:15.390 "copy": true, 00:26:15.390 "nvme_iov_md": false 00:26:15.390 }, 00:26:15.390 "memory_domains": [ 00:26:15.390 { 00:26:15.390 "dma_device_id": "system", 00:26:15.390 "dma_device_type": 1 00:26:15.390 }, 00:26:15.390 { 00:26:15.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:15.390 "dma_device_type": 2 00:26:15.390 } 00:26:15.390 ], 00:26:15.390 "driver_specific": {} 00:26:15.390 } 00:26:15.390 ] 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.390 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:15.648 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:15.648 "name": "Existed_Raid", 00:26:15.648 "uuid": "bdfc69e6-038b-4974-ac37-b3657c69315d", 00:26:15.648 "strip_size_kb": 0, 00:26:15.648 "state": "online", 00:26:15.648 "raid_level": "raid1", 00:26:15.648 "superblock": true, 00:26:15.648 "num_base_bdevs": 4, 00:26:15.648 "num_base_bdevs_discovered": 4, 00:26:15.648 "num_base_bdevs_operational": 4, 00:26:15.648 "base_bdevs_list": [ 00:26:15.648 { 00:26:15.648 "name": "BaseBdev1", 00:26:15.648 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:15.648 "is_configured": true, 00:26:15.648 "data_offset": 2048, 00:26:15.648 "data_size": 63488 00:26:15.648 }, 00:26:15.648 { 00:26:15.648 "name": "BaseBdev2", 00:26:15.648 "uuid": "bedcadbc-1191-46e6-b614-8a8c66ad7d2f", 00:26:15.648 "is_configured": true, 00:26:15.648 "data_offset": 2048, 00:26:15.648 
"data_size": 63488 00:26:15.648 }, 00:26:15.648 { 00:26:15.648 "name": "BaseBdev3", 00:26:15.648 "uuid": "a7178ddd-b706-409d-95cf-291f9afc7bd6", 00:26:15.648 "is_configured": true, 00:26:15.648 "data_offset": 2048, 00:26:15.648 "data_size": 63488 00:26:15.648 }, 00:26:15.648 { 00:26:15.648 "name": "BaseBdev4", 00:26:15.648 "uuid": "7227ba3b-f6df-4bfd-a5ae-07805007be80", 00:26:15.648 "is_configured": true, 00:26:15.648 "data_offset": 2048, 00:26:15.648 "data_size": 63488 00:26:15.648 } 00:26:15.648 ] 00:26:15.648 }' 00:26:15.648 14:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:15.648 14:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:16.214 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:16.472 [2024-07-15 14:19:02.398927] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:16.472 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:16.472 "name": "Existed_Raid", 00:26:16.472 "aliases": [ 00:26:16.472 "bdfc69e6-038b-4974-ac37-b3657c69315d" 00:26:16.472 ], 00:26:16.472 "product_name": "Raid Volume", 00:26:16.472 "block_size": 512, 00:26:16.472 "num_blocks": 63488, 00:26:16.472 "uuid": "bdfc69e6-038b-4974-ac37-b3657c69315d", 00:26:16.472 "assigned_rate_limits": { 00:26:16.472 "rw_ios_per_sec": 0, 00:26:16.472 "rw_mbytes_per_sec": 0, 00:26:16.472 "r_mbytes_per_sec": 0, 00:26:16.472 "w_mbytes_per_sec": 0 00:26:16.472 }, 00:26:16.472 "claimed": false, 00:26:16.472 "zoned": false, 00:26:16.472 "supported_io_types": { 00:26:16.472 "read": true, 00:26:16.472 "write": true, 00:26:16.472 "unmap": false, 00:26:16.472 "flush": false, 00:26:16.472 "reset": true, 00:26:16.472 "nvme_admin": false, 00:26:16.472 "nvme_io": false, 00:26:16.472 "nvme_io_md": false, 00:26:16.472 "write_zeroes": true, 00:26:16.472 "zcopy": false, 00:26:16.472 "get_zone_info": false, 00:26:16.472 "zone_management": false, 00:26:16.472 "zone_append": false, 00:26:16.472 "compare": false, 00:26:16.472 "compare_and_write": false, 00:26:16.472 "abort": false, 00:26:16.472 "seek_hole": false, 00:26:16.472 "seek_data": false, 00:26:16.472 "copy": false, 00:26:16.472 "nvme_iov_md": false 00:26:16.472 }, 00:26:16.472 "memory_domains": [ 00:26:16.472 { 00:26:16.472 "dma_device_id": "system", 00:26:16.472 "dma_device_type": 1 00:26:16.472 }, 00:26:16.472 { 00:26:16.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.472 "dma_device_type": 2 00:26:16.472 }, 00:26:16.472 { 00:26:16.472 "dma_device_id": "system", 00:26:16.472 "dma_device_type": 1 
00:26:16.472 }, 00:26:16.472 { 00:26:16.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.472 "dma_device_type": 2 00:26:16.472 }, 00:26:16.472 { 00:26:16.472 "dma_device_id": "system", 00:26:16.472 "dma_device_type": 1 00:26:16.472 }, 00:26:16.472 { 00:26:16.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.472 "dma_device_type": 2 00:26:16.472 }, 00:26:16.472 { 00:26:16.472 "dma_device_id": "system", 00:26:16.472 "dma_device_type": 1 00:26:16.472 }, 00:26:16.472 { 00:26:16.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.472 "dma_device_type": 2 00:26:16.472 } 00:26:16.473 ], 00:26:16.473 "driver_specific": { 00:26:16.473 "raid": { 00:26:16.473 "uuid": "bdfc69e6-038b-4974-ac37-b3657c69315d", 00:26:16.473 "strip_size_kb": 0, 00:26:16.473 "state": "online", 00:26:16.473 "raid_level": "raid1", 00:26:16.473 "superblock": true, 00:26:16.473 "num_base_bdevs": 4, 00:26:16.473 "num_base_bdevs_discovered": 4, 00:26:16.473 "num_base_bdevs_operational": 4, 00:26:16.473 "base_bdevs_list": [ 00:26:16.473 { 00:26:16.473 "name": "BaseBdev1", 00:26:16.473 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:16.473 "is_configured": true, 00:26:16.473 "data_offset": 2048, 00:26:16.473 "data_size": 63488 00:26:16.473 }, 00:26:16.473 { 00:26:16.473 "name": "BaseBdev2", 00:26:16.473 "uuid": "bedcadbc-1191-46e6-b614-8a8c66ad7d2f", 00:26:16.473 "is_configured": true, 00:26:16.473 "data_offset": 2048, 00:26:16.473 "data_size": 63488 00:26:16.473 }, 00:26:16.473 { 00:26:16.473 "name": "BaseBdev3", 00:26:16.473 "uuid": "a7178ddd-b706-409d-95cf-291f9afc7bd6", 00:26:16.473 "is_configured": true, 00:26:16.473 "data_offset": 2048, 00:26:16.473 "data_size": 63488 00:26:16.473 }, 00:26:16.473 { 00:26:16.473 "name": "BaseBdev4", 00:26:16.473 "uuid": "7227ba3b-f6df-4bfd-a5ae-07805007be80", 00:26:16.473 "is_configured": true, 00:26:16.473 "data_offset": 2048, 00:26:16.473 "data_size": 63488 00:26:16.473 } 00:26:16.473 ] 00:26:16.473 } 00:26:16.473 } 00:26:16.473 }' 00:26:16.473 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:16.473 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:26:16.473 BaseBdev2 00:26:16.473 BaseBdev3 00:26:16.473 BaseBdev4' 00:26:16.473 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:16.473 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:26:16.473 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:16.732 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:16.732 "name": "BaseBdev1", 00:26:16.732 "aliases": [ 00:26:16.732 "4f8d9873-d347-4603-b484-4b5ded9c5d5a" 00:26:16.732 ], 00:26:16.732 "product_name": "Malloc disk", 00:26:16.732 "block_size": 512, 00:26:16.732 "num_blocks": 65536, 00:26:16.732 "uuid": "4f8d9873-d347-4603-b484-4b5ded9c5d5a", 00:26:16.732 "assigned_rate_limits": { 00:26:16.732 "rw_ios_per_sec": 0, 00:26:16.732 "rw_mbytes_per_sec": 0, 00:26:16.732 "r_mbytes_per_sec": 0, 00:26:16.732 "w_mbytes_per_sec": 0 00:26:16.732 }, 00:26:16.732 "claimed": true, 00:26:16.732 "claim_type": "exclusive_write", 00:26:16.732 "zoned": false, 00:26:16.732 "supported_io_types": { 00:26:16.732 "read": true, 00:26:16.732 "write": true, 
00:26:16.732 "unmap": true, 00:26:16.732 "flush": true, 00:26:16.732 "reset": true, 00:26:16.732 "nvme_admin": false, 00:26:16.732 "nvme_io": false, 00:26:16.732 "nvme_io_md": false, 00:26:16.732 "write_zeroes": true, 00:26:16.732 "zcopy": true, 00:26:16.732 "get_zone_info": false, 00:26:16.732 "zone_management": false, 00:26:16.732 "zone_append": false, 00:26:16.732 "compare": false, 00:26:16.732 "compare_and_write": false, 00:26:16.732 "abort": true, 00:26:16.732 "seek_hole": false, 00:26:16.732 "seek_data": false, 00:26:16.732 "copy": true, 00:26:16.732 "nvme_iov_md": false 00:26:16.732 }, 00:26:16.732 "memory_domains": [ 00:26:16.732 { 00:26:16.732 "dma_device_id": "system", 00:26:16.732 "dma_device_type": 1 00:26:16.732 }, 00:26:16.732 { 00:26:16.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:16.732 "dma_device_type": 2 00:26:16.732 } 00:26:16.732 ], 00:26:16.732 "driver_specific": {} 00:26:16.732 }' 00:26:16.732 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.990 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:16.990 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:16.991 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.991 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:16.991 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:16.991 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:16.991 14:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.249 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.249 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.249 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.249 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:17.249 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:17.249 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:17.249 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:17.507 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:17.507 "name": "BaseBdev2", 00:26:17.507 "aliases": [ 00:26:17.507 "bedcadbc-1191-46e6-b614-8a8c66ad7d2f" 00:26:17.507 ], 00:26:17.507 "product_name": "Malloc disk", 00:26:17.507 "block_size": 512, 00:26:17.507 "num_blocks": 65536, 00:26:17.507 "uuid": "bedcadbc-1191-46e6-b614-8a8c66ad7d2f", 00:26:17.507 "assigned_rate_limits": { 00:26:17.507 "rw_ios_per_sec": 0, 00:26:17.507 "rw_mbytes_per_sec": 0, 00:26:17.507 "r_mbytes_per_sec": 0, 00:26:17.507 "w_mbytes_per_sec": 0 00:26:17.507 }, 00:26:17.507 "claimed": true, 00:26:17.507 "claim_type": "exclusive_write", 00:26:17.507 "zoned": false, 00:26:17.507 "supported_io_types": { 00:26:17.507 "read": true, 00:26:17.507 "write": true, 00:26:17.507 "unmap": true, 00:26:17.507 "flush": true, 00:26:17.507 "reset": true, 00:26:17.507 "nvme_admin": false, 00:26:17.507 "nvme_io": false, 
00:26:17.507 "nvme_io_md": false, 00:26:17.507 "write_zeroes": true, 00:26:17.507 "zcopy": true, 00:26:17.507 "get_zone_info": false, 00:26:17.507 "zone_management": false, 00:26:17.507 "zone_append": false, 00:26:17.508 "compare": false, 00:26:17.508 "compare_and_write": false, 00:26:17.508 "abort": true, 00:26:17.508 "seek_hole": false, 00:26:17.508 "seek_data": false, 00:26:17.508 "copy": true, 00:26:17.508 "nvme_iov_md": false 00:26:17.508 }, 00:26:17.508 "memory_domains": [ 00:26:17.508 { 00:26:17.508 "dma_device_id": "system", 00:26:17.508 "dma_device_type": 1 00:26:17.508 }, 00:26:17.508 { 00:26:17.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.508 "dma_device_type": 2 00:26:17.508 } 00:26:17.508 ], 00:26:17.508 "driver_specific": {} 00:26:17.508 }' 00:26:17.508 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.508 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:17.508 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:17.508 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.765 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:17.766 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:17.766 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.766 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:17.766 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:17.766 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:17.766 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.023 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.023 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.023 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:18.023 14:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.282 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.282 "name": "BaseBdev3", 00:26:18.282 "aliases": [ 00:26:18.282 "a7178ddd-b706-409d-95cf-291f9afc7bd6" 00:26:18.282 ], 00:26:18.282 "product_name": "Malloc disk", 00:26:18.282 "block_size": 512, 00:26:18.282 "num_blocks": 65536, 00:26:18.282 "uuid": "a7178ddd-b706-409d-95cf-291f9afc7bd6", 00:26:18.282 "assigned_rate_limits": { 00:26:18.282 "rw_ios_per_sec": 0, 00:26:18.282 "rw_mbytes_per_sec": 0, 00:26:18.282 "r_mbytes_per_sec": 0, 00:26:18.282 "w_mbytes_per_sec": 0 00:26:18.282 }, 00:26:18.282 "claimed": true, 00:26:18.282 "claim_type": "exclusive_write", 00:26:18.282 "zoned": false, 00:26:18.282 "supported_io_types": { 00:26:18.282 "read": true, 00:26:18.282 "write": true, 00:26:18.282 "unmap": true, 00:26:18.282 "flush": true, 00:26:18.282 "reset": true, 00:26:18.282 "nvme_admin": false, 00:26:18.282 "nvme_io": false, 00:26:18.282 "nvme_io_md": false, 00:26:18.282 "write_zeroes": true, 00:26:18.282 "zcopy": true, 00:26:18.282 "get_zone_info": false, 00:26:18.282 
"zone_management": false, 00:26:18.282 "zone_append": false, 00:26:18.282 "compare": false, 00:26:18.282 "compare_and_write": false, 00:26:18.282 "abort": true, 00:26:18.282 "seek_hole": false, 00:26:18.282 "seek_data": false, 00:26:18.283 "copy": true, 00:26:18.283 "nvme_iov_md": false 00:26:18.283 }, 00:26:18.283 "memory_domains": [ 00:26:18.283 { 00:26:18.283 "dma_device_id": "system", 00:26:18.283 "dma_device_type": 1 00:26:18.283 }, 00:26:18.283 { 00:26:18.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.283 "dma_device_type": 2 00:26:18.283 } 00:26:18.283 ], 00:26:18.283 "driver_specific": {} 00:26:18.283 }' 00:26:18.283 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.283 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.283 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.283 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.283 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:18.283 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:18.283 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.541 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:18.542 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:18.542 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.542 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:18.542 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:18.542 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:18.542 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:18.542 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:18.800 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:18.800 "name": "BaseBdev4", 00:26:18.800 "aliases": [ 00:26:18.800 "7227ba3b-f6df-4bfd-a5ae-07805007be80" 00:26:18.800 ], 00:26:18.800 "product_name": "Malloc disk", 00:26:18.800 "block_size": 512, 00:26:18.800 "num_blocks": 65536, 00:26:18.800 "uuid": "7227ba3b-f6df-4bfd-a5ae-07805007be80", 00:26:18.800 "assigned_rate_limits": { 00:26:18.800 "rw_ios_per_sec": 0, 00:26:18.800 "rw_mbytes_per_sec": 0, 00:26:18.800 "r_mbytes_per_sec": 0, 00:26:18.800 "w_mbytes_per_sec": 0 00:26:18.800 }, 00:26:18.800 "claimed": true, 00:26:18.800 "claim_type": "exclusive_write", 00:26:18.800 "zoned": false, 00:26:18.800 "supported_io_types": { 00:26:18.800 "read": true, 00:26:18.800 "write": true, 00:26:18.800 "unmap": true, 00:26:18.800 "flush": true, 00:26:18.800 "reset": true, 00:26:18.800 "nvme_admin": false, 00:26:18.800 "nvme_io": false, 00:26:18.800 "nvme_io_md": false, 00:26:18.800 "write_zeroes": true, 00:26:18.800 "zcopy": true, 00:26:18.800 "get_zone_info": false, 00:26:18.800 "zone_management": false, 00:26:18.800 "zone_append": false, 00:26:18.800 "compare": false, 00:26:18.800 "compare_and_write": false, 00:26:18.800 
"abort": true, 00:26:18.800 "seek_hole": false, 00:26:18.800 "seek_data": false, 00:26:18.800 "copy": true, 00:26:18.800 "nvme_iov_md": false 00:26:18.800 }, 00:26:18.800 "memory_domains": [ 00:26:18.800 { 00:26:18.800 "dma_device_id": "system", 00:26:18.800 "dma_device_type": 1 00:26:18.800 }, 00:26:18.800 { 00:26:18.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.800 "dma_device_type": 2 00:26:18.800 } 00:26:18.800 ], 00:26:18.800 "driver_specific": {} 00:26:18.800 }' 00:26:18.800 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.800 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:18.801 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:18.801 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.118 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:19.118 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:19.118 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.118 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:19.118 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:19.118 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.118 14:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:19.118 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:19.118 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:19.377 [2024-07-15 14:19:05.313860] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:19.637 
14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:19.637 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:19.896 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:19.896 "name": "Existed_Raid", 00:26:19.896 "uuid": "bdfc69e6-038b-4974-ac37-b3657c69315d", 00:26:19.896 "strip_size_kb": 0, 00:26:19.896 "state": "online", 00:26:19.896 "raid_level": "raid1", 00:26:19.896 "superblock": true, 00:26:19.896 "num_base_bdevs": 4, 00:26:19.896 "num_base_bdevs_discovered": 3, 00:26:19.896 "num_base_bdevs_operational": 3, 00:26:19.896 "base_bdevs_list": [ 00:26:19.896 { 00:26:19.896 "name": null, 00:26:19.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:19.896 "is_configured": false, 00:26:19.896 "data_offset": 2048, 00:26:19.896 "data_size": 63488 00:26:19.896 }, 00:26:19.896 { 00:26:19.896 "name": "BaseBdev2", 00:26:19.896 "uuid": "bedcadbc-1191-46e6-b614-8a8c66ad7d2f", 00:26:19.896 "is_configured": true, 00:26:19.896 "data_offset": 2048, 00:26:19.896 "data_size": 63488 00:26:19.896 }, 00:26:19.896 { 00:26:19.896 "name": "BaseBdev3", 00:26:19.896 "uuid": "a7178ddd-b706-409d-95cf-291f9afc7bd6", 00:26:19.896 "is_configured": true, 00:26:19.896 "data_offset": 2048, 00:26:19.896 "data_size": 63488 00:26:19.896 }, 00:26:19.896 { 00:26:19.896 "name": "BaseBdev4", 00:26:19.896 "uuid": "7227ba3b-f6df-4bfd-a5ae-07805007be80", 00:26:19.896 "is_configured": true, 00:26:19.896 "data_offset": 2048, 00:26:19.896 "data_size": 63488 00:26:19.896 } 00:26:19.896 ] 00:26:19.896 }' 00:26:19.896 14:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:19.896 14:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.464 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:26:20.464 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:20.464 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:20.464 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.724 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:20.724 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:20.724 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:26:20.983 [2024-07-15 14:19:06.839855] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:20.983 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:20.983 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:20.983 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.983 14:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:21.243 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:21.243 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:21.243 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:26:21.503 [2024-07-15 14:19:07.430992] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:21.761 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:21.761 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:21.761 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.761 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:26:22.020 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:26:22.020 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:22.020 14:19:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:26:22.279 [2024-07-15 14:19:08.066492] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:26:22.279 [2024-07-15 14:19:08.066925] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.279 [2024-07-15 14:19:08.217008] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.279 [2024-07-15 14:19:08.217356] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.280 [2024-07-15 14:19:08.217504] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:26:22.280 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:26:22.280 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:26:22.280 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:22.280 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:26:22.538 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:26:22.539 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:26:22.539 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:26:22.539 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:26:22.539 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:22.539 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:26:22.823 BaseBdev2 00:26:22.823 14:19:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:26:22.823 14:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:26:22.823 14:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:22.823 14:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:22.823 14:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:22.823 14:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:22.823 14:19:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:23.100 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:23.359 [ 00:26:23.359 { 00:26:23.359 "name": "BaseBdev2", 00:26:23.359 "aliases": [ 00:26:23.359 "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e" 00:26:23.359 ], 00:26:23.359 "product_name": "Malloc disk", 00:26:23.359 "block_size": 512, 00:26:23.359 "num_blocks": 65536, 00:26:23.359 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:23.359 "assigned_rate_limits": { 00:26:23.359 "rw_ios_per_sec": 0, 00:26:23.359 "rw_mbytes_per_sec": 0, 00:26:23.359 "r_mbytes_per_sec": 0, 00:26:23.359 "w_mbytes_per_sec": 0 00:26:23.359 }, 00:26:23.359 "claimed": false, 00:26:23.359 "zoned": false, 00:26:23.359 "supported_io_types": { 00:26:23.359 "read": true, 00:26:23.359 "write": true, 00:26:23.359 "unmap": true, 00:26:23.359 "flush": true, 00:26:23.359 "reset": true, 00:26:23.359 "nvme_admin": false, 00:26:23.359 "nvme_io": false, 00:26:23.359 "nvme_io_md": false, 00:26:23.359 "write_zeroes": true, 00:26:23.359 "zcopy": true, 00:26:23.359 "get_zone_info": false, 00:26:23.359 "zone_management": false, 00:26:23.359 "zone_append": false, 00:26:23.359 "compare": false, 00:26:23.359 "compare_and_write": false, 00:26:23.359 "abort": true, 00:26:23.359 "seek_hole": false, 00:26:23.359 "seek_data": false, 00:26:23.359 "copy": true, 00:26:23.359 "nvme_iov_md": false 00:26:23.359 }, 00:26:23.359 "memory_domains": [ 00:26:23.359 { 00:26:23.359 "dma_device_id": "system", 00:26:23.359 "dma_device_type": 1 00:26:23.359 }, 00:26:23.359 { 00:26:23.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.359 "dma_device_type": 2 00:26:23.359 } 00:26:23.359 ], 00:26:23.359 "driver_specific": {} 00:26:23.359 } 00:26:23.359 ] 00:26:23.618 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:23.618 14:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:23.618 14:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:23.618 14:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:26:23.876 BaseBdev3 00:26:23.876 14:19:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:26:23.876 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local 
bdev_name=BaseBdev3 00:26:23.876 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:23.876 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:23.876 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:23.876 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:23.876 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:24.134 14:19:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:26:24.393 [ 00:26:24.393 { 00:26:24.393 "name": "BaseBdev3", 00:26:24.393 "aliases": [ 00:26:24.393 "0b203685-c6c1-451e-8065-afa425901af6" 00:26:24.393 ], 00:26:24.393 "product_name": "Malloc disk", 00:26:24.393 "block_size": 512, 00:26:24.393 "num_blocks": 65536, 00:26:24.393 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:24.393 "assigned_rate_limits": { 00:26:24.393 "rw_ios_per_sec": 0, 00:26:24.393 "rw_mbytes_per_sec": 0, 00:26:24.393 "r_mbytes_per_sec": 0, 00:26:24.393 "w_mbytes_per_sec": 0 00:26:24.393 }, 00:26:24.393 "claimed": false, 00:26:24.393 "zoned": false, 00:26:24.393 "supported_io_types": { 00:26:24.393 "read": true, 00:26:24.393 "write": true, 00:26:24.393 "unmap": true, 00:26:24.393 "flush": true, 00:26:24.393 "reset": true, 00:26:24.393 "nvme_admin": false, 00:26:24.393 "nvme_io": false, 00:26:24.393 "nvme_io_md": false, 00:26:24.393 "write_zeroes": true, 00:26:24.393 "zcopy": true, 00:26:24.393 "get_zone_info": false, 00:26:24.393 "zone_management": false, 00:26:24.393 "zone_append": false, 00:26:24.393 "compare": false, 00:26:24.393 "compare_and_write": false, 00:26:24.393 "abort": true, 00:26:24.393 "seek_hole": false, 00:26:24.393 "seek_data": false, 00:26:24.393 "copy": true, 00:26:24.393 "nvme_iov_md": false 00:26:24.393 }, 00:26:24.393 "memory_domains": [ 00:26:24.393 { 00:26:24.393 "dma_device_id": "system", 00:26:24.393 "dma_device_type": 1 00:26:24.393 }, 00:26:24.393 { 00:26:24.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.393 "dma_device_type": 2 00:26:24.393 } 00:26:24.393 ], 00:26:24.393 "driver_specific": {} 00:26:24.393 } 00:26:24.393 ] 00:26:24.393 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:24.393 14:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:24.393 14:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:24.393 14:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:26:24.651 BaseBdev4 00:26:24.651 14:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:26:24.651 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:26:24.651 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:24.651 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:24.651 14:19:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:24.651 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:24.651 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:24.913 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:26:24.913 [ 00:26:24.913 { 00:26:24.913 "name": "BaseBdev4", 00:26:24.913 "aliases": [ 00:26:24.913 "a4affe36-a487-457f-b939-9131fab55da8" 00:26:24.913 ], 00:26:24.913 "product_name": "Malloc disk", 00:26:24.913 "block_size": 512, 00:26:24.913 "num_blocks": 65536, 00:26:24.913 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:24.913 "assigned_rate_limits": { 00:26:24.913 "rw_ios_per_sec": 0, 00:26:24.913 "rw_mbytes_per_sec": 0, 00:26:24.913 "r_mbytes_per_sec": 0, 00:26:24.913 "w_mbytes_per_sec": 0 00:26:24.913 }, 00:26:24.913 "claimed": false, 00:26:24.913 "zoned": false, 00:26:24.913 "supported_io_types": { 00:26:24.913 "read": true, 00:26:24.913 "write": true, 00:26:24.913 "unmap": true, 00:26:24.913 "flush": true, 00:26:24.913 "reset": true, 00:26:24.913 "nvme_admin": false, 00:26:24.913 "nvme_io": false, 00:26:24.913 "nvme_io_md": false, 00:26:24.913 "write_zeroes": true, 00:26:24.913 "zcopy": true, 00:26:24.913 "get_zone_info": false, 00:26:24.913 "zone_management": false, 00:26:24.913 "zone_append": false, 00:26:24.913 "compare": false, 00:26:24.913 "compare_and_write": false, 00:26:24.913 "abort": true, 00:26:24.913 "seek_hole": false, 00:26:24.913 "seek_data": false, 00:26:24.913 "copy": true, 00:26:24.913 "nvme_iov_md": false 00:26:24.913 }, 00:26:24.913 "memory_domains": [ 00:26:24.913 { 00:26:24.913 "dma_device_id": "system", 00:26:24.913 "dma_device_type": 1 00:26:24.913 }, 00:26:24.913 { 00:26:24.913 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:24.913 "dma_device_type": 2 00:26:24.913 } 00:26:24.913 ], 00:26:24.913 "driver_specific": {} 00:26:24.913 } 00:26:24.913 ] 00:26:25.172 14:19:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:25.172 14:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:26:25.172 14:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:26:25.172 14:19:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:26:25.431 [2024-07-15 14:19:11.225389] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:25.431 [2024-07-15 14:19:11.225677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:25.431 [2024-07-15 14:19:11.225828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:25.431 [2024-07-15 14:19:11.227278] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:25.431 [2024-07-15 14:19:11.227440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:25.431 14:19:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:25.431 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:25.689 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:25.689 "name": "Existed_Raid", 00:26:25.689 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:25.689 "strip_size_kb": 0, 00:26:25.689 "state": "configuring", 00:26:25.689 "raid_level": "raid1", 00:26:25.689 "superblock": true, 00:26:25.689 "num_base_bdevs": 4, 00:26:25.689 "num_base_bdevs_discovered": 3, 00:26:25.689 "num_base_bdevs_operational": 4, 00:26:25.689 "base_bdevs_list": [ 00:26:25.689 { 00:26:25.689 "name": "BaseBdev1", 00:26:25.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:25.689 "is_configured": false, 00:26:25.689 "data_offset": 0, 00:26:25.689 "data_size": 0 00:26:25.689 }, 00:26:25.689 { 00:26:25.689 "name": "BaseBdev2", 00:26:25.689 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:25.689 "is_configured": true, 00:26:25.689 "data_offset": 2048, 00:26:25.689 "data_size": 63488 00:26:25.689 }, 00:26:25.689 { 00:26:25.689 "name": "BaseBdev3", 00:26:25.689 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:25.690 "is_configured": true, 00:26:25.690 "data_offset": 2048, 00:26:25.690 "data_size": 63488 00:26:25.690 }, 00:26:25.690 { 00:26:25.690 "name": "BaseBdev4", 00:26:25.690 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:25.690 "is_configured": true, 00:26:25.690 "data_offset": 2048, 00:26:25.690 "data_size": 63488 00:26:25.690 } 00:26:25.690 ] 00:26:25.690 }' 00:26:25.690 14:19:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:25.690 14:19:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:26.258 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:26:26.515 [2024-07-15 14:19:12.457562] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:26.516 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:26.790 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:26.790 "name": "Existed_Raid", 00:26:26.790 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:26.790 "strip_size_kb": 0, 00:26:26.790 "state": "configuring", 00:26:26.790 "raid_level": "raid1", 00:26:26.790 "superblock": true, 00:26:26.790 "num_base_bdevs": 4, 00:26:26.790 "num_base_bdevs_discovered": 2, 00:26:26.790 "num_base_bdevs_operational": 4, 00:26:26.790 "base_bdevs_list": [ 00:26:26.790 { 00:26:26.790 "name": "BaseBdev1", 00:26:26.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:26.790 "is_configured": false, 00:26:26.790 "data_offset": 0, 00:26:26.790 "data_size": 0 00:26:26.790 }, 00:26:26.790 { 00:26:26.790 "name": null, 00:26:26.790 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:26.790 "is_configured": false, 00:26:26.790 "data_offset": 2048, 00:26:26.790 "data_size": 63488 00:26:26.790 }, 00:26:26.790 { 00:26:26.790 "name": "BaseBdev3", 00:26:26.790 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:26.790 "is_configured": true, 00:26:26.790 "data_offset": 2048, 00:26:26.790 "data_size": 63488 00:26:26.790 }, 00:26:26.790 { 00:26:26.790 "name": "BaseBdev4", 00:26:26.790 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:26.790 "is_configured": true, 00:26:26.790 "data_offset": 2048, 00:26:26.790 "data_size": 63488 00:26:26.790 } 00:26:26.790 ] 00:26:26.790 }' 00:26:26.790 14:19:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:26.790 14:19:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:27.724 14:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:27.724 14:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:27.724 14:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:26:27.724 14:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:26:27.981 
[2024-07-15 14:19:13.921392] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:27.981 BaseBdev1 00:26:27.981 14:19:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:26:27.981 14:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:26:27.981 14:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:27.981 14:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:27.982 14:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:27.982 14:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:27.982 14:19:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:28.240 14:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:28.496 [ 00:26:28.496 { 00:26:28.496 "name": "BaseBdev1", 00:26:28.496 "aliases": [ 00:26:28.496 "ad8a37fb-87fc-4a51-8984-54bd4d06e42e" 00:26:28.496 ], 00:26:28.496 "product_name": "Malloc disk", 00:26:28.496 "block_size": 512, 00:26:28.496 "num_blocks": 65536, 00:26:28.496 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:28.496 "assigned_rate_limits": { 00:26:28.497 "rw_ios_per_sec": 0, 00:26:28.497 "rw_mbytes_per_sec": 0, 00:26:28.497 "r_mbytes_per_sec": 0, 00:26:28.497 "w_mbytes_per_sec": 0 00:26:28.497 }, 00:26:28.497 "claimed": true, 00:26:28.497 "claim_type": "exclusive_write", 00:26:28.497 "zoned": false, 00:26:28.497 "supported_io_types": { 00:26:28.497 "read": true, 00:26:28.497 "write": true, 00:26:28.497 "unmap": true, 00:26:28.497 "flush": true, 00:26:28.497 "reset": true, 00:26:28.497 "nvme_admin": false, 00:26:28.497 "nvme_io": false, 00:26:28.497 "nvme_io_md": false, 00:26:28.497 "write_zeroes": true, 00:26:28.497 "zcopy": true, 00:26:28.497 "get_zone_info": false, 00:26:28.497 "zone_management": false, 00:26:28.497 "zone_append": false, 00:26:28.497 "compare": false, 00:26:28.497 "compare_and_write": false, 00:26:28.497 "abort": true, 00:26:28.497 "seek_hole": false, 00:26:28.497 "seek_data": false, 00:26:28.497 "copy": true, 00:26:28.497 "nvme_iov_md": false 00:26:28.497 }, 00:26:28.497 "memory_domains": [ 00:26:28.497 { 00:26:28.497 "dma_device_id": "system", 00:26:28.497 "dma_device_type": 1 00:26:28.497 }, 00:26:28.497 { 00:26:28.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:28.497 "dma_device_type": 2 00:26:28.497 } 00:26:28.497 ], 00:26:28.497 "driver_specific": {} 00:26:28.497 } 00:26:28.497 ] 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.497 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:28.755 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.755 "name": "Existed_Raid", 00:26:28.755 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:28.755 "strip_size_kb": 0, 00:26:28.755 "state": "configuring", 00:26:28.755 "raid_level": "raid1", 00:26:28.755 "superblock": true, 00:26:28.755 "num_base_bdevs": 4, 00:26:28.755 "num_base_bdevs_discovered": 3, 00:26:28.755 "num_base_bdevs_operational": 4, 00:26:28.755 "base_bdevs_list": [ 00:26:28.755 { 00:26:28.755 "name": "BaseBdev1", 00:26:28.755 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:28.755 "is_configured": true, 00:26:28.755 "data_offset": 2048, 00:26:28.755 "data_size": 63488 00:26:28.755 }, 00:26:28.755 { 00:26:28.755 "name": null, 00:26:28.755 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:28.755 "is_configured": false, 00:26:28.755 "data_offset": 2048, 00:26:28.755 "data_size": 63488 00:26:28.755 }, 00:26:28.755 { 00:26:28.755 "name": "BaseBdev3", 00:26:28.755 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:28.755 "is_configured": true, 00:26:28.755 "data_offset": 2048, 00:26:28.755 "data_size": 63488 00:26:28.755 }, 00:26:28.755 { 00:26:28.755 "name": "BaseBdev4", 00:26:28.755 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:28.755 "is_configured": true, 00:26:28.755 "data_offset": 2048, 00:26:28.755 "data_size": 63488 00:26:28.755 } 00:26:28.755 ] 00:26:28.755 }' 00:26:28.755 14:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.755 14:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:29.321 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.321 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:29.578 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:26:29.578 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:26:29.836 [2024-07-15 14:19:15.777770] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:29.836 14:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:30.093 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:30.093 "name": "Existed_Raid", 00:26:30.093 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:30.093 "strip_size_kb": 0, 00:26:30.093 "state": "configuring", 00:26:30.093 "raid_level": "raid1", 00:26:30.093 "superblock": true, 00:26:30.093 "num_base_bdevs": 4, 00:26:30.093 "num_base_bdevs_discovered": 2, 00:26:30.093 "num_base_bdevs_operational": 4, 00:26:30.093 "base_bdevs_list": [ 00:26:30.093 { 00:26:30.093 "name": "BaseBdev1", 00:26:30.093 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:30.093 "is_configured": true, 00:26:30.093 "data_offset": 2048, 00:26:30.093 "data_size": 63488 00:26:30.093 }, 00:26:30.093 { 00:26:30.093 "name": null, 00:26:30.094 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:30.094 "is_configured": false, 00:26:30.094 "data_offset": 2048, 00:26:30.094 "data_size": 63488 00:26:30.094 }, 00:26:30.094 { 00:26:30.094 "name": null, 00:26:30.094 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:30.094 "is_configured": false, 00:26:30.094 "data_offset": 2048, 00:26:30.094 "data_size": 63488 00:26:30.094 }, 00:26:30.094 { 00:26:30.094 "name": "BaseBdev4", 00:26:30.094 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:30.094 "is_configured": true, 00:26:30.094 "data_offset": 2048, 00:26:30.094 "data_size": 63488 00:26:30.094 } 00:26:30.094 ] 00:26:30.094 }' 00:26:30.094 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:30.094 14:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:31.066 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:31.066 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.066 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:26:31.066 14:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:26:31.325 [2024-07-15 14:19:17.253953] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.325 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:31.584 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.584 "name": "Existed_Raid", 00:26:31.584 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:31.584 "strip_size_kb": 0, 00:26:31.584 "state": "configuring", 00:26:31.584 "raid_level": "raid1", 00:26:31.584 "superblock": true, 00:26:31.584 "num_base_bdevs": 4, 00:26:31.584 "num_base_bdevs_discovered": 3, 00:26:31.584 "num_base_bdevs_operational": 4, 00:26:31.584 "base_bdevs_list": [ 00:26:31.584 { 00:26:31.584 "name": "BaseBdev1", 00:26:31.584 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:31.584 "is_configured": true, 00:26:31.584 "data_offset": 2048, 00:26:31.584 "data_size": 63488 00:26:31.584 }, 00:26:31.584 { 00:26:31.584 "name": null, 00:26:31.584 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:31.584 "is_configured": false, 00:26:31.584 "data_offset": 2048, 00:26:31.584 "data_size": 63488 00:26:31.584 }, 00:26:31.584 { 00:26:31.584 "name": "BaseBdev3", 00:26:31.584 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:31.584 "is_configured": true, 00:26:31.584 "data_offset": 2048, 00:26:31.584 "data_size": 63488 00:26:31.584 }, 00:26:31.584 { 00:26:31.584 "name": "BaseBdev4", 00:26:31.584 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:31.584 "is_configured": true, 00:26:31.584 "data_offset": 2048, 00:26:31.584 "data_size": 63488 00:26:31.584 } 00:26:31.584 ] 00:26:31.584 }' 00:26:31.584 14:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.584 14:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:32.520 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.520 14:19:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:26:32.520 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:26:32.520 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:26:32.778 [2024-07-15 14:19:18.742192] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.037 14:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:33.296 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:33.296 "name": "Existed_Raid", 00:26:33.296 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:33.296 "strip_size_kb": 0, 00:26:33.296 "state": "configuring", 00:26:33.296 "raid_level": "raid1", 00:26:33.296 "superblock": true, 00:26:33.296 "num_base_bdevs": 4, 00:26:33.296 "num_base_bdevs_discovered": 2, 00:26:33.296 "num_base_bdevs_operational": 4, 00:26:33.296 "base_bdevs_list": [ 00:26:33.296 { 00:26:33.296 "name": null, 00:26:33.296 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:33.296 "is_configured": false, 00:26:33.296 "data_offset": 2048, 00:26:33.296 "data_size": 63488 00:26:33.296 }, 00:26:33.296 { 00:26:33.296 "name": null, 00:26:33.296 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:33.296 "is_configured": false, 00:26:33.296 "data_offset": 2048, 00:26:33.296 "data_size": 63488 00:26:33.296 }, 00:26:33.296 { 00:26:33.296 "name": "BaseBdev3", 00:26:33.296 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:33.296 "is_configured": true, 00:26:33.296 "data_offset": 2048, 00:26:33.296 "data_size": 63488 00:26:33.296 }, 00:26:33.296 { 00:26:33.296 "name": "BaseBdev4", 00:26:33.296 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:33.296 "is_configured": true, 00:26:33.296 "data_offset": 2048, 00:26:33.296 "data_size": 63488 00:26:33.296 } 00:26:33.296 ] 00:26:33.296 }' 00:26:33.296 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:26:33.296 14:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:33.865 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:33.865 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:26:34.124 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:26:34.124 14:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:26:34.382 [2024-07-15 14:19:20.188663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:34.382 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.641 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:34.641 "name": "Existed_Raid", 00:26:34.641 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:34.641 "strip_size_kb": 0, 00:26:34.641 "state": "configuring", 00:26:34.641 "raid_level": "raid1", 00:26:34.641 "superblock": true, 00:26:34.641 "num_base_bdevs": 4, 00:26:34.641 "num_base_bdevs_discovered": 3, 00:26:34.641 "num_base_bdevs_operational": 4, 00:26:34.641 "base_bdevs_list": [ 00:26:34.641 { 00:26:34.641 "name": null, 00:26:34.641 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:34.641 "is_configured": false, 00:26:34.641 "data_offset": 2048, 00:26:34.641 "data_size": 63488 00:26:34.641 }, 00:26:34.641 { 00:26:34.641 "name": "BaseBdev2", 00:26:34.641 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:34.641 "is_configured": true, 00:26:34.641 "data_offset": 2048, 00:26:34.641 "data_size": 63488 00:26:34.641 }, 00:26:34.641 { 00:26:34.641 "name": "BaseBdev3", 00:26:34.641 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:34.641 "is_configured": true, 00:26:34.641 "data_offset": 2048, 00:26:34.641 "data_size": 
63488 00:26:34.641 }, 00:26:34.641 { 00:26:34.641 "name": "BaseBdev4", 00:26:34.641 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:34.641 "is_configured": true, 00:26:34.641 "data_offset": 2048, 00:26:34.641 "data_size": 63488 00:26:34.641 } 00:26:34.641 ] 00:26:34.641 }' 00:26:34.641 14:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:34.641 14:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:35.208 14:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.208 14:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:26:35.468 14:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:26:35.468 14:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.468 14:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:26:35.726 14:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ad8a37fb-87fc-4a51-8984-54bd4d06e42e 00:26:35.984 [2024-07-15 14:19:21.851941] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:26:35.984 [2024-07-15 14:19:21.852352] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:26:35.984 [2024-07-15 14:19:21.852483] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:35.984 [2024-07-15 14:19:21.852626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:26:35.984 [2024-07-15 14:19:21.853034] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:26:35.984 [2024-07-15 14:19:21.853161] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009380 00:26:35.984 [2024-07-15 14:19:21.853368] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:35.984 NewBaseBdev 00:26:35.984 14:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:26:35.984 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:26:35.984 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:26:35.984 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:26:35.984 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:26:35.984 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:26:35.984 14:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:26:36.244 14:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:26:36.504 [ 00:26:36.504 { 00:26:36.504 "name": 
"NewBaseBdev", 00:26:36.505 "aliases": [ 00:26:36.505 "ad8a37fb-87fc-4a51-8984-54bd4d06e42e" 00:26:36.505 ], 00:26:36.505 "product_name": "Malloc disk", 00:26:36.505 "block_size": 512, 00:26:36.505 "num_blocks": 65536, 00:26:36.505 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:36.505 "assigned_rate_limits": { 00:26:36.505 "rw_ios_per_sec": 0, 00:26:36.505 "rw_mbytes_per_sec": 0, 00:26:36.505 "r_mbytes_per_sec": 0, 00:26:36.505 "w_mbytes_per_sec": 0 00:26:36.505 }, 00:26:36.505 "claimed": true, 00:26:36.505 "claim_type": "exclusive_write", 00:26:36.505 "zoned": false, 00:26:36.505 "supported_io_types": { 00:26:36.505 "read": true, 00:26:36.505 "write": true, 00:26:36.505 "unmap": true, 00:26:36.505 "flush": true, 00:26:36.505 "reset": true, 00:26:36.505 "nvme_admin": false, 00:26:36.505 "nvme_io": false, 00:26:36.505 "nvme_io_md": false, 00:26:36.505 "write_zeroes": true, 00:26:36.505 "zcopy": true, 00:26:36.505 "get_zone_info": false, 00:26:36.505 "zone_management": false, 00:26:36.505 "zone_append": false, 00:26:36.505 "compare": false, 00:26:36.505 "compare_and_write": false, 00:26:36.505 "abort": true, 00:26:36.505 "seek_hole": false, 00:26:36.505 "seek_data": false, 00:26:36.505 "copy": true, 00:26:36.505 "nvme_iov_md": false 00:26:36.505 }, 00:26:36.505 "memory_domains": [ 00:26:36.505 { 00:26:36.505 "dma_device_id": "system", 00:26:36.505 "dma_device_type": 1 00:26:36.505 }, 00:26:36.505 { 00:26:36.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.505 "dma_device_type": 2 00:26:36.505 } 00:26:36.505 ], 00:26:36.505 "driver_specific": {} 00:26:36.505 } 00:26:36.505 ] 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:36.505 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.764 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:36.764 "name": "Existed_Raid", 00:26:36.764 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:36.764 "strip_size_kb": 0, 00:26:36.764 "state": "online", 00:26:36.764 "raid_level": "raid1", 00:26:36.764 "superblock": 
true, 00:26:36.764 "num_base_bdevs": 4, 00:26:36.764 "num_base_bdevs_discovered": 4, 00:26:36.764 "num_base_bdevs_operational": 4, 00:26:36.764 "base_bdevs_list": [ 00:26:36.764 { 00:26:36.764 "name": "NewBaseBdev", 00:26:36.764 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:36.764 "is_configured": true, 00:26:36.764 "data_offset": 2048, 00:26:36.764 "data_size": 63488 00:26:36.764 }, 00:26:36.764 { 00:26:36.764 "name": "BaseBdev2", 00:26:36.764 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:36.764 "is_configured": true, 00:26:36.764 "data_offset": 2048, 00:26:36.764 "data_size": 63488 00:26:36.764 }, 00:26:36.764 { 00:26:36.764 "name": "BaseBdev3", 00:26:36.764 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:36.764 "is_configured": true, 00:26:36.764 "data_offset": 2048, 00:26:36.764 "data_size": 63488 00:26:36.764 }, 00:26:36.764 { 00:26:36.764 "name": "BaseBdev4", 00:26:36.764 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:36.764 "is_configured": true, 00:26:36.764 "data_offset": 2048, 00:26:36.764 "data_size": 63488 00:26:36.764 } 00:26:36.764 ] 00:26:36.764 }' 00:26:36.764 14:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:36.764 14:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:37.420 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:26:37.678 [2024-07-15 14:19:23.564439] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:37.678 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:37.678 "name": "Existed_Raid", 00:26:37.678 "aliases": [ 00:26:37.678 "289fa1ec-6401-443c-b577-6e5ddf63851d" 00:26:37.678 ], 00:26:37.678 "product_name": "Raid Volume", 00:26:37.678 "block_size": 512, 00:26:37.678 "num_blocks": 63488, 00:26:37.678 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:37.678 "assigned_rate_limits": { 00:26:37.678 "rw_ios_per_sec": 0, 00:26:37.678 "rw_mbytes_per_sec": 0, 00:26:37.678 "r_mbytes_per_sec": 0, 00:26:37.678 "w_mbytes_per_sec": 0 00:26:37.678 }, 00:26:37.678 "claimed": false, 00:26:37.678 "zoned": false, 00:26:37.678 "supported_io_types": { 00:26:37.678 "read": true, 00:26:37.678 "write": true, 00:26:37.678 "unmap": false, 00:26:37.678 "flush": false, 00:26:37.678 "reset": true, 00:26:37.678 "nvme_admin": false, 00:26:37.678 "nvme_io": false, 00:26:37.678 "nvme_io_md": false, 00:26:37.678 "write_zeroes": true, 00:26:37.678 "zcopy": false, 00:26:37.678 "get_zone_info": false, 00:26:37.678 "zone_management": false, 00:26:37.678 "zone_append": false, 00:26:37.678 
"compare": false, 00:26:37.678 "compare_and_write": false, 00:26:37.678 "abort": false, 00:26:37.678 "seek_hole": false, 00:26:37.678 "seek_data": false, 00:26:37.678 "copy": false, 00:26:37.678 "nvme_iov_md": false 00:26:37.678 }, 00:26:37.678 "memory_domains": [ 00:26:37.678 { 00:26:37.678 "dma_device_id": "system", 00:26:37.678 "dma_device_type": 1 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.678 "dma_device_type": 2 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "dma_device_id": "system", 00:26:37.678 "dma_device_type": 1 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.678 "dma_device_type": 2 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "dma_device_id": "system", 00:26:37.678 "dma_device_type": 1 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.678 "dma_device_type": 2 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "dma_device_id": "system", 00:26:37.678 "dma_device_type": 1 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.678 "dma_device_type": 2 00:26:37.678 } 00:26:37.678 ], 00:26:37.678 "driver_specific": { 00:26:37.678 "raid": { 00:26:37.678 "uuid": "289fa1ec-6401-443c-b577-6e5ddf63851d", 00:26:37.678 "strip_size_kb": 0, 00:26:37.678 "state": "online", 00:26:37.678 "raid_level": "raid1", 00:26:37.678 "superblock": true, 00:26:37.678 "num_base_bdevs": 4, 00:26:37.678 "num_base_bdevs_discovered": 4, 00:26:37.678 "num_base_bdevs_operational": 4, 00:26:37.678 "base_bdevs_list": [ 00:26:37.678 { 00:26:37.678 "name": "NewBaseBdev", 00:26:37.678 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:37.678 "is_configured": true, 00:26:37.678 "data_offset": 2048, 00:26:37.678 "data_size": 63488 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "name": "BaseBdev2", 00:26:37.678 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:37.678 "is_configured": true, 00:26:37.678 "data_offset": 2048, 00:26:37.678 "data_size": 63488 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "name": "BaseBdev3", 00:26:37.678 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:37.678 "is_configured": true, 00:26:37.678 "data_offset": 2048, 00:26:37.678 "data_size": 63488 00:26:37.678 }, 00:26:37.678 { 00:26:37.678 "name": "BaseBdev4", 00:26:37.678 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:37.678 "is_configured": true, 00:26:37.678 "data_offset": 2048, 00:26:37.678 "data_size": 63488 00:26:37.678 } 00:26:37.678 ] 00:26:37.678 } 00:26:37.678 } 00:26:37.678 }' 00:26:37.678 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:37.678 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:26:37.678 BaseBdev2 00:26:37.678 BaseBdev3 00:26:37.678 BaseBdev4' 00:26:37.678 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:37.678 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:26:37.678 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:37.935 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:37.935 "name": "NewBaseBdev", 00:26:37.935 "aliases": [ 00:26:37.935 
"ad8a37fb-87fc-4a51-8984-54bd4d06e42e" 00:26:37.935 ], 00:26:37.935 "product_name": "Malloc disk", 00:26:37.935 "block_size": 512, 00:26:37.935 "num_blocks": 65536, 00:26:37.935 "uuid": "ad8a37fb-87fc-4a51-8984-54bd4d06e42e", 00:26:37.935 "assigned_rate_limits": { 00:26:37.935 "rw_ios_per_sec": 0, 00:26:37.935 "rw_mbytes_per_sec": 0, 00:26:37.935 "r_mbytes_per_sec": 0, 00:26:37.935 "w_mbytes_per_sec": 0 00:26:37.935 }, 00:26:37.935 "claimed": true, 00:26:37.935 "claim_type": "exclusive_write", 00:26:37.935 "zoned": false, 00:26:37.935 "supported_io_types": { 00:26:37.935 "read": true, 00:26:37.935 "write": true, 00:26:37.935 "unmap": true, 00:26:37.935 "flush": true, 00:26:37.935 "reset": true, 00:26:37.935 "nvme_admin": false, 00:26:37.935 "nvme_io": false, 00:26:37.935 "nvme_io_md": false, 00:26:37.935 "write_zeroes": true, 00:26:37.935 "zcopy": true, 00:26:37.935 "get_zone_info": false, 00:26:37.935 "zone_management": false, 00:26:37.935 "zone_append": false, 00:26:37.935 "compare": false, 00:26:37.935 "compare_and_write": false, 00:26:37.935 "abort": true, 00:26:37.935 "seek_hole": false, 00:26:37.935 "seek_data": false, 00:26:37.935 "copy": true, 00:26:37.935 "nvme_iov_md": false 00:26:37.935 }, 00:26:37.935 "memory_domains": [ 00:26:37.935 { 00:26:37.935 "dma_device_id": "system", 00:26:37.935 "dma_device_type": 1 00:26:37.935 }, 00:26:37.936 { 00:26:37.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:37.936 "dma_device_type": 2 00:26:37.936 } 00:26:37.936 ], 00:26:37.936 "driver_specific": {} 00:26:37.936 }' 00:26:37.936 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:38.192 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:38.192 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:38.192 14:19:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.192 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.192 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:38.192 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.192 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.192 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:38.192 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.449 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.449 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:38.449 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:38.449 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:26:38.449 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:38.707 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:38.707 "name": "BaseBdev2", 00:26:38.707 "aliases": [ 00:26:38.707 "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e" 00:26:38.707 ], 00:26:38.707 "product_name": "Malloc disk", 00:26:38.707 "block_size": 512, 00:26:38.707 
"num_blocks": 65536, 00:26:38.707 "uuid": "f8345ca6-4553-44c8-b3fe-b87c4b0aba3e", 00:26:38.707 "assigned_rate_limits": { 00:26:38.707 "rw_ios_per_sec": 0, 00:26:38.707 "rw_mbytes_per_sec": 0, 00:26:38.707 "r_mbytes_per_sec": 0, 00:26:38.707 "w_mbytes_per_sec": 0 00:26:38.707 }, 00:26:38.707 "claimed": true, 00:26:38.707 "claim_type": "exclusive_write", 00:26:38.707 "zoned": false, 00:26:38.707 "supported_io_types": { 00:26:38.707 "read": true, 00:26:38.707 "write": true, 00:26:38.707 "unmap": true, 00:26:38.707 "flush": true, 00:26:38.707 "reset": true, 00:26:38.707 "nvme_admin": false, 00:26:38.707 "nvme_io": false, 00:26:38.707 "nvme_io_md": false, 00:26:38.707 "write_zeroes": true, 00:26:38.707 "zcopy": true, 00:26:38.707 "get_zone_info": false, 00:26:38.707 "zone_management": false, 00:26:38.707 "zone_append": false, 00:26:38.707 "compare": false, 00:26:38.707 "compare_and_write": false, 00:26:38.707 "abort": true, 00:26:38.707 "seek_hole": false, 00:26:38.707 "seek_data": false, 00:26:38.707 "copy": true, 00:26:38.707 "nvme_iov_md": false 00:26:38.707 }, 00:26:38.707 "memory_domains": [ 00:26:38.707 { 00:26:38.707 "dma_device_id": "system", 00:26:38.707 "dma_device_type": 1 00:26:38.707 }, 00:26:38.707 { 00:26:38.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:38.707 "dma_device_type": 2 00:26:38.707 } 00:26:38.707 ], 00:26:38.707 "driver_specific": {} 00:26:38.707 }' 00:26:38.708 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:38.708 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:38.708 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:38.708 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.708 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:38.965 14:19:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:26:39.223 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:39.223 "name": "BaseBdev3", 00:26:39.223 "aliases": [ 00:26:39.223 "0b203685-c6c1-451e-8065-afa425901af6" 00:26:39.223 ], 00:26:39.223 "product_name": "Malloc disk", 00:26:39.223 "block_size": 512, 00:26:39.223 "num_blocks": 65536, 00:26:39.223 "uuid": "0b203685-c6c1-451e-8065-afa425901af6", 00:26:39.223 "assigned_rate_limits": { 00:26:39.223 "rw_ios_per_sec": 
0, 00:26:39.223 "rw_mbytes_per_sec": 0, 00:26:39.223 "r_mbytes_per_sec": 0, 00:26:39.223 "w_mbytes_per_sec": 0 00:26:39.223 }, 00:26:39.223 "claimed": true, 00:26:39.223 "claim_type": "exclusive_write", 00:26:39.223 "zoned": false, 00:26:39.223 "supported_io_types": { 00:26:39.223 "read": true, 00:26:39.223 "write": true, 00:26:39.223 "unmap": true, 00:26:39.223 "flush": true, 00:26:39.223 "reset": true, 00:26:39.223 "nvme_admin": false, 00:26:39.223 "nvme_io": false, 00:26:39.223 "nvme_io_md": false, 00:26:39.223 "write_zeroes": true, 00:26:39.223 "zcopy": true, 00:26:39.223 "get_zone_info": false, 00:26:39.223 "zone_management": false, 00:26:39.223 "zone_append": false, 00:26:39.223 "compare": false, 00:26:39.223 "compare_and_write": false, 00:26:39.223 "abort": true, 00:26:39.223 "seek_hole": false, 00:26:39.223 "seek_data": false, 00:26:39.223 "copy": true, 00:26:39.223 "nvme_iov_md": false 00:26:39.223 }, 00:26:39.223 "memory_domains": [ 00:26:39.223 { 00:26:39.223 "dma_device_id": "system", 00:26:39.223 "dma_device_type": 1 00:26:39.223 }, 00:26:39.223 { 00:26:39.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.223 "dma_device_type": 2 00:26:39.223 } 00:26:39.223 ], 00:26:39.223 "driver_specific": {} 00:26:39.223 }' 00:26:39.223 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:39.481 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:39.481 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:39.481 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:39.481 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:39.481 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:39.481 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:39.481 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:39.739 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:39.739 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:39.739 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:39.739 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:39.739 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:39.739 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:26:39.739 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:39.997 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:39.997 "name": "BaseBdev4", 00:26:39.997 "aliases": [ 00:26:39.997 "a4affe36-a487-457f-b939-9131fab55da8" 00:26:39.997 ], 00:26:39.997 "product_name": "Malloc disk", 00:26:39.997 "block_size": 512, 00:26:39.997 "num_blocks": 65536, 00:26:39.997 "uuid": "a4affe36-a487-457f-b939-9131fab55da8", 00:26:39.997 "assigned_rate_limits": { 00:26:39.997 "rw_ios_per_sec": 0, 00:26:39.997 "rw_mbytes_per_sec": 0, 00:26:39.997 "r_mbytes_per_sec": 0, 00:26:39.997 "w_mbytes_per_sec": 0 00:26:39.997 }, 00:26:39.997 "claimed": 
true, 00:26:39.997 "claim_type": "exclusive_write", 00:26:39.997 "zoned": false, 00:26:39.997 "supported_io_types": { 00:26:39.997 "read": true, 00:26:39.997 "write": true, 00:26:39.997 "unmap": true, 00:26:39.997 "flush": true, 00:26:39.997 "reset": true, 00:26:39.997 "nvme_admin": false, 00:26:39.997 "nvme_io": false, 00:26:39.997 "nvme_io_md": false, 00:26:39.997 "write_zeroes": true, 00:26:39.997 "zcopy": true, 00:26:39.997 "get_zone_info": false, 00:26:39.997 "zone_management": false, 00:26:39.997 "zone_append": false, 00:26:39.997 "compare": false, 00:26:39.997 "compare_and_write": false, 00:26:39.997 "abort": true, 00:26:39.997 "seek_hole": false, 00:26:39.997 "seek_data": false, 00:26:39.997 "copy": true, 00:26:39.997 "nvme_iov_md": false 00:26:39.997 }, 00:26:39.997 "memory_domains": [ 00:26:39.997 { 00:26:39.997 "dma_device_id": "system", 00:26:39.997 "dma_device_type": 1 00:26:39.997 }, 00:26:39.997 { 00:26:39.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:39.997 "dma_device_type": 2 00:26:39.997 } 00:26:39.997 ], 00:26:39.997 "driver_specific": {} 00:26:39.997 }' 00:26:39.997 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:39.997 14:19:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:40.255 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:40.513 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:40.513 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:26:40.771 [2024-07-15 14:19:26.584620] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:40.771 [2024-07-15 14:19:26.584916] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:40.771 [2024-07-15 14:19:26.585095] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:40.771 [2024-07-15 14:19:26.585463] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:40.771 [2024-07-15 14:19:26.585586] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name Existed_Raid, state offline 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 207974 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 207974 ']' 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 
-- # kill -0 207974 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 207974 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 207974' 00:26:40.771 killing process with pid 207974 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 207974 00:26:40.771 [2024-07-15 14:19:26.628336] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:40.771 14:19:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 207974 00:26:41.030 [2024-07-15 14:19:26.972441] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:42.406 14:19:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:26:42.406 00:26:42.406 real 0m37.486s 00:26:42.406 user 1m9.073s 00:26:42.406 sys 0m4.420s 00:26:42.406 14:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:42.406 14:19:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.406 ************************************ 00:26:42.406 END TEST raid_state_function_test_sb 00:26:42.406 ************************************ 00:26:42.406 14:19:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:26:42.406 14:19:28 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:26:42.406 14:19:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:26:42.406 14:19:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.406 14:19:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:42.406 ************************************ 00:26:42.406 START TEST raid_superblock_test 00:26:42.406 ************************************ 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local 
strip_size 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=209102 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 209102 /var/tmp/spdk-raid.sock 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 209102 ']' 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:42.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.406 14:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:42.406 [2024-07-15 14:19:28.212255] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:26:42.406 [2024-07-15 14:19:28.212557] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209102 ] 00:26:42.406 [2024-07-15 14:19:28.364792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.664 [2024-07-15 14:19:28.621086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.923 [2024-07-15 14:19:28.821149] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:43.491 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:26:43.750 malloc1 00:26:43.750 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:44.009 [2024-07-15 14:19:29.778953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:44.009 [2024-07-15 14:19:29.779324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:44.009 [2024-07-15 14:19:29.779485] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:44.009 [2024-07-15 14:19:29.779620] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:44.009 [2024-07-15 14:19:29.781557] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:44.009 [2024-07-15 14:19:29.781753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:44.009 pt1 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 
-- # base_bdevs_malloc+=($bdev_malloc) 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:44.009 14:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:26:44.267 malloc2 00:26:44.267 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:44.536 [2024-07-15 14:19:30.345251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:44.536 [2024-07-15 14:19:30.346653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:44.537 [2024-07-15 14:19:30.346750] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:26:44.537 [2024-07-15 14:19:30.347003] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:44.537 [2024-07-15 14:19:30.348840] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:44.537 [2024-07-15 14:19:30.349012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:44.537 pt2 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:44.537 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:26:44.803 malloc3 00:26:44.803 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:45.062 [2024-07-15 14:19:30.923957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:45.062 [2024-07-15 14:19:30.924237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.062 [2024-07-15 14:19:30.924379] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:45.062 [2024-07-15 14:19:30.924525] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.062 [2024-07-15 14:19:30.926391] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.062 [2024-07-15 14:19:30.926570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:45.062 pt3 00:26:45.062 
14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:45.062 14:19:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:26:45.320 malloc4 00:26:45.320 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:45.578 [2024-07-15 14:19:31.555329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:45.578 [2024-07-15 14:19:31.555657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.578 [2024-07-15 14:19:31.555751] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:45.578 [2024-07-15 14:19:31.555991] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.578 [2024-07-15 14:19:31.557797] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.578 [2024-07-15 14:19:31.557965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:45.578 pt4 00:26:45.578 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:26:45.578 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:26:45.578 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:26:46.146 [2024-07-15 14:19:31.847504] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:46.146 [2024-07-15 14:19:31.850132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:46.146 [2024-07-15 14:19:31.850426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:46.146 [2024-07-15 14:19:31.850716] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:46.146 [2024-07-15 14:19:31.851187] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:26:46.146 [2024-07-15 14:19:31.851367] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:46.146 [2024-07-15 14:19:31.851843] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:46.146 [2024-07-15 14:19:31.852489] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:26:46.146 [2024-07-15 14:19:31.852718] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:26:46.146 [2024-07-15 14:19:31.853215] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:46.146 14:19:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.146 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:46.146 "name": "raid_bdev1", 00:26:46.146 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:46.146 "strip_size_kb": 0, 00:26:46.146 "state": "online", 00:26:46.146 "raid_level": "raid1", 00:26:46.146 "superblock": true, 00:26:46.146 "num_base_bdevs": 4, 00:26:46.146 "num_base_bdevs_discovered": 4, 00:26:46.146 "num_base_bdevs_operational": 4, 00:26:46.146 "base_bdevs_list": [ 00:26:46.146 { 00:26:46.146 "name": "pt1", 00:26:46.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:46.146 "is_configured": true, 00:26:46.146 "data_offset": 2048, 00:26:46.146 "data_size": 63488 00:26:46.146 }, 00:26:46.146 { 00:26:46.146 "name": "pt2", 00:26:46.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:46.146 "is_configured": true, 00:26:46.146 "data_offset": 2048, 00:26:46.146 "data_size": 63488 00:26:46.146 }, 00:26:46.146 { 00:26:46.146 "name": "pt3", 00:26:46.146 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:46.146 "is_configured": true, 00:26:46.146 "data_offset": 2048, 00:26:46.146 "data_size": 63488 00:26:46.146 }, 00:26:46.146 { 00:26:46.146 "name": "pt4", 00:26:46.146 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:46.146 "is_configured": true, 00:26:46.146 "data_offset": 2048, 00:26:46.146 "data_size": 63488 00:26:46.146 } 00:26:46.146 ] 00:26:46.146 }' 00:26:46.146 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:46.146 14:19:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.081 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:26:47.081 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:47.081 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:47.081 
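Condensed, the build-up the trace has just walked through is a handful of RPCs: four malloc bdevs, four passthru bdevs with fixed UUIDs layered on top of them, one raid1 volume with an on-disk superblock, and a state check. A sketch of that sequence, with the socket path, sizes and names copied from the trace above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # back each passthru bdev ptN with a 32 MB, 512-byte-block malloc bdev and a fixed UUID
  for i in 1 2 3 4; do
      $RPC bdev_malloc_create 32 512 -b malloc$i
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # -s writes a raid superblock onto the base bdevs; the volume is expected to come up online
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'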
14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:47.081 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:47.081 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:47.081 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:47.081 14:19:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:47.081 [2024-07-15 14:19:33.049701] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:47.081 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:47.081 "name": "raid_bdev1", 00:26:47.081 "aliases": [ 00:26:47.081 "9282d4cb-92df-4a38-b6f5-6430988b1527" 00:26:47.081 ], 00:26:47.081 "product_name": "Raid Volume", 00:26:47.081 "block_size": 512, 00:26:47.081 "num_blocks": 63488, 00:26:47.081 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:47.081 "assigned_rate_limits": { 00:26:47.081 "rw_ios_per_sec": 0, 00:26:47.081 "rw_mbytes_per_sec": 0, 00:26:47.081 "r_mbytes_per_sec": 0, 00:26:47.081 "w_mbytes_per_sec": 0 00:26:47.081 }, 00:26:47.081 "claimed": false, 00:26:47.081 "zoned": false, 00:26:47.081 "supported_io_types": { 00:26:47.081 "read": true, 00:26:47.081 "write": true, 00:26:47.081 "unmap": false, 00:26:47.081 "flush": false, 00:26:47.081 "reset": true, 00:26:47.081 "nvme_admin": false, 00:26:47.081 "nvme_io": false, 00:26:47.081 "nvme_io_md": false, 00:26:47.081 "write_zeroes": true, 00:26:47.081 "zcopy": false, 00:26:47.081 "get_zone_info": false, 00:26:47.081 "zone_management": false, 00:26:47.081 "zone_append": false, 00:26:47.081 "compare": false, 00:26:47.081 "compare_and_write": false, 00:26:47.081 "abort": false, 00:26:47.081 "seek_hole": false, 00:26:47.081 "seek_data": false, 00:26:47.081 "copy": false, 00:26:47.081 "nvme_iov_md": false 00:26:47.081 }, 00:26:47.081 "memory_domains": [ 00:26:47.081 { 00:26:47.081 "dma_device_id": "system", 00:26:47.081 "dma_device_type": 1 00:26:47.081 }, 00:26:47.081 { 00:26:47.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.081 "dma_device_type": 2 00:26:47.081 }, 00:26:47.081 { 00:26:47.081 "dma_device_id": "system", 00:26:47.081 "dma_device_type": 1 00:26:47.081 }, 00:26:47.081 { 00:26:47.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.081 "dma_device_type": 2 00:26:47.081 }, 00:26:47.081 { 00:26:47.081 "dma_device_id": "system", 00:26:47.081 "dma_device_type": 1 00:26:47.081 }, 00:26:47.081 { 00:26:47.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.081 "dma_device_type": 2 00:26:47.081 }, 00:26:47.081 { 00:26:47.081 "dma_device_id": "system", 00:26:47.081 "dma_device_type": 1 00:26:47.081 }, 00:26:47.081 { 00:26:47.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.081 "dma_device_type": 2 00:26:47.081 } 00:26:47.081 ], 00:26:47.081 "driver_specific": { 00:26:47.081 "raid": { 00:26:47.081 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:47.081 "strip_size_kb": 0, 00:26:47.081 "state": "online", 00:26:47.081 "raid_level": "raid1", 00:26:47.081 "superblock": true, 00:26:47.081 "num_base_bdevs": 4, 00:26:47.082 "num_base_bdevs_discovered": 4, 00:26:47.082 "num_base_bdevs_operational": 4, 00:26:47.082 "base_bdevs_list": [ 00:26:47.082 { 00:26:47.082 "name": "pt1", 00:26:47.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:47.082 "is_configured": true, 00:26:47.082 
"data_offset": 2048, 00:26:47.082 "data_size": 63488 00:26:47.082 }, 00:26:47.082 { 00:26:47.082 "name": "pt2", 00:26:47.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:47.082 "is_configured": true, 00:26:47.082 "data_offset": 2048, 00:26:47.082 "data_size": 63488 00:26:47.082 }, 00:26:47.082 { 00:26:47.082 "name": "pt3", 00:26:47.082 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:47.082 "is_configured": true, 00:26:47.082 "data_offset": 2048, 00:26:47.082 "data_size": 63488 00:26:47.082 }, 00:26:47.082 { 00:26:47.082 "name": "pt4", 00:26:47.082 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:47.082 "is_configured": true, 00:26:47.082 "data_offset": 2048, 00:26:47.082 "data_size": 63488 00:26:47.082 } 00:26:47.082 ] 00:26:47.082 } 00:26:47.082 } 00:26:47.082 }' 00:26:47.082 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:47.339 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:47.339 pt2 00:26:47.339 pt3 00:26:47.339 pt4' 00:26:47.340 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:47.340 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:47.340 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:47.597 "name": "pt1", 00:26:47.597 "aliases": [ 00:26:47.597 "00000000-0000-0000-0000-000000000001" 00:26:47.597 ], 00:26:47.597 "product_name": "passthru", 00:26:47.597 "block_size": 512, 00:26:47.597 "num_blocks": 65536, 00:26:47.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:47.597 "assigned_rate_limits": { 00:26:47.597 "rw_ios_per_sec": 0, 00:26:47.597 "rw_mbytes_per_sec": 0, 00:26:47.597 "r_mbytes_per_sec": 0, 00:26:47.597 "w_mbytes_per_sec": 0 00:26:47.597 }, 00:26:47.597 "claimed": true, 00:26:47.597 "claim_type": "exclusive_write", 00:26:47.597 "zoned": false, 00:26:47.597 "supported_io_types": { 00:26:47.597 "read": true, 00:26:47.597 "write": true, 00:26:47.597 "unmap": true, 00:26:47.597 "flush": true, 00:26:47.597 "reset": true, 00:26:47.597 "nvme_admin": false, 00:26:47.597 "nvme_io": false, 00:26:47.597 "nvme_io_md": false, 00:26:47.597 "write_zeroes": true, 00:26:47.597 "zcopy": true, 00:26:47.597 "get_zone_info": false, 00:26:47.597 "zone_management": false, 00:26:47.597 "zone_append": false, 00:26:47.597 "compare": false, 00:26:47.597 "compare_and_write": false, 00:26:47.597 "abort": true, 00:26:47.597 "seek_hole": false, 00:26:47.597 "seek_data": false, 00:26:47.597 "copy": true, 00:26:47.597 "nvme_iov_md": false 00:26:47.597 }, 00:26:47.597 "memory_domains": [ 00:26:47.597 { 00:26:47.597 "dma_device_id": "system", 00:26:47.597 "dma_device_type": 1 00:26:47.597 }, 00:26:47.597 { 00:26:47.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.597 "dma_device_type": 2 00:26:47.597 } 00:26:47.597 ], 00:26:47.597 "driver_specific": { 00:26:47.597 "passthru": { 00:26:47.597 "name": "pt1", 00:26:47.597 "base_bdev_name": "malloc1" 00:26:47.597 } 00:26:47.597 } 00:26:47.597 }' 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:47.597 14:19:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:47.597 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:47.854 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:47.854 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:47.854 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:47.854 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:47.854 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:47.854 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:47.854 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:48.111 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:48.111 "name": "pt2", 00:26:48.111 "aliases": [ 00:26:48.111 "00000000-0000-0000-0000-000000000002" 00:26:48.111 ], 00:26:48.111 "product_name": "passthru", 00:26:48.111 "block_size": 512, 00:26:48.111 "num_blocks": 65536, 00:26:48.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:48.111 "assigned_rate_limits": { 00:26:48.111 "rw_ios_per_sec": 0, 00:26:48.111 "rw_mbytes_per_sec": 0, 00:26:48.111 "r_mbytes_per_sec": 0, 00:26:48.111 "w_mbytes_per_sec": 0 00:26:48.111 }, 00:26:48.111 "claimed": true, 00:26:48.111 "claim_type": "exclusive_write", 00:26:48.111 "zoned": false, 00:26:48.111 "supported_io_types": { 00:26:48.111 "read": true, 00:26:48.111 "write": true, 00:26:48.111 "unmap": true, 00:26:48.111 "flush": true, 00:26:48.111 "reset": true, 00:26:48.111 "nvme_admin": false, 00:26:48.111 "nvme_io": false, 00:26:48.111 "nvme_io_md": false, 00:26:48.111 "write_zeroes": true, 00:26:48.111 "zcopy": true, 00:26:48.111 "get_zone_info": false, 00:26:48.111 "zone_management": false, 00:26:48.111 "zone_append": false, 00:26:48.111 "compare": false, 00:26:48.111 "compare_and_write": false, 00:26:48.111 "abort": true, 00:26:48.111 "seek_hole": false, 00:26:48.111 "seek_data": false, 00:26:48.111 "copy": true, 00:26:48.111 "nvme_iov_md": false 00:26:48.111 }, 00:26:48.111 "memory_domains": [ 00:26:48.111 { 00:26:48.111 "dma_device_id": "system", 00:26:48.111 "dma_device_type": 1 00:26:48.111 }, 00:26:48.111 { 00:26:48.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.111 "dma_device_type": 2 00:26:48.111 } 00:26:48.111 ], 00:26:48.111 "driver_specific": { 00:26:48.111 "passthru": { 00:26:48.111 "name": "pt2", 00:26:48.111 "base_bdev_name": "malloc2" 00:26:48.111 } 00:26:48.111 } 00:26:48.111 }' 00:26:48.111 14:19:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:48.111 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:48.111 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:48.111 14:19:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:48.368 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:48.368 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:48.368 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:48.368 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:48.368 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:48.368 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:48.368 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:48.625 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:48.625 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:48.625 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:48.625 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:48.883 "name": "pt3", 00:26:48.883 "aliases": [ 00:26:48.883 "00000000-0000-0000-0000-000000000003" 00:26:48.883 ], 00:26:48.883 "product_name": "passthru", 00:26:48.883 "block_size": 512, 00:26:48.883 "num_blocks": 65536, 00:26:48.883 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:48.883 "assigned_rate_limits": { 00:26:48.883 "rw_ios_per_sec": 0, 00:26:48.883 "rw_mbytes_per_sec": 0, 00:26:48.883 "r_mbytes_per_sec": 0, 00:26:48.883 "w_mbytes_per_sec": 0 00:26:48.883 }, 00:26:48.883 "claimed": true, 00:26:48.883 "claim_type": "exclusive_write", 00:26:48.883 "zoned": false, 00:26:48.883 "supported_io_types": { 00:26:48.883 "read": true, 00:26:48.883 "write": true, 00:26:48.883 "unmap": true, 00:26:48.883 "flush": true, 00:26:48.883 "reset": true, 00:26:48.883 "nvme_admin": false, 00:26:48.883 "nvme_io": false, 00:26:48.883 "nvme_io_md": false, 00:26:48.883 "write_zeroes": true, 00:26:48.883 "zcopy": true, 00:26:48.883 "get_zone_info": false, 00:26:48.883 "zone_management": false, 00:26:48.883 "zone_append": false, 00:26:48.883 "compare": false, 00:26:48.883 "compare_and_write": false, 00:26:48.883 "abort": true, 00:26:48.883 "seek_hole": false, 00:26:48.883 "seek_data": false, 00:26:48.883 "copy": true, 00:26:48.883 "nvme_iov_md": false 00:26:48.883 }, 00:26:48.883 "memory_domains": [ 00:26:48.883 { 00:26:48.883 "dma_device_id": "system", 00:26:48.883 "dma_device_type": 1 00:26:48.883 }, 00:26:48.883 { 00:26:48.883 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:48.883 "dma_device_type": 2 00:26:48.883 } 00:26:48.883 ], 00:26:48.883 "driver_specific": { 00:26:48.883 "passthru": { 00:26:48.883 "name": "pt3", 00:26:48.883 "base_bdev_name": "malloc3" 00:26:48.883 } 00:26:48.883 } 00:26:48.883 }' 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:48.883 
14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:48.883 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:49.140 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:49.140 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:49.140 14:19:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:49.140 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:49.140 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:49.140 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:49.140 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:49.431 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:49.431 "name": "pt4", 00:26:49.431 "aliases": [ 00:26:49.431 "00000000-0000-0000-0000-000000000004" 00:26:49.431 ], 00:26:49.431 "product_name": "passthru", 00:26:49.431 "block_size": 512, 00:26:49.431 "num_blocks": 65536, 00:26:49.431 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:49.431 "assigned_rate_limits": { 00:26:49.431 "rw_ios_per_sec": 0, 00:26:49.431 "rw_mbytes_per_sec": 0, 00:26:49.431 "r_mbytes_per_sec": 0, 00:26:49.431 "w_mbytes_per_sec": 0 00:26:49.431 }, 00:26:49.431 "claimed": true, 00:26:49.431 "claim_type": "exclusive_write", 00:26:49.431 "zoned": false, 00:26:49.431 "supported_io_types": { 00:26:49.431 "read": true, 00:26:49.431 "write": true, 00:26:49.431 "unmap": true, 00:26:49.431 "flush": true, 00:26:49.431 "reset": true, 00:26:49.431 "nvme_admin": false, 00:26:49.432 "nvme_io": false, 00:26:49.432 "nvme_io_md": false, 00:26:49.432 "write_zeroes": true, 00:26:49.432 "zcopy": true, 00:26:49.432 "get_zone_info": false, 00:26:49.432 "zone_management": false, 00:26:49.432 "zone_append": false, 00:26:49.432 "compare": false, 00:26:49.432 "compare_and_write": false, 00:26:49.432 "abort": true, 00:26:49.432 "seek_hole": false, 00:26:49.432 "seek_data": false, 00:26:49.432 "copy": true, 00:26:49.432 "nvme_iov_md": false 00:26:49.432 }, 00:26:49.432 "memory_domains": [ 00:26:49.432 { 00:26:49.432 "dma_device_id": "system", 00:26:49.432 "dma_device_type": 1 00:26:49.432 }, 00:26:49.432 { 00:26:49.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:49.432 "dma_device_type": 2 00:26:49.432 } 00:26:49.432 ], 00:26:49.432 "driver_specific": { 00:26:49.432 "passthru": { 00:26:49.432 "name": "pt4", 00:26:49.432 "base_bdev_name": "malloc4" 00:26:49.432 } 00:26:49.432 } 00:26:49.432 }' 00:26:49.432 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:49.432 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:49.432 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:49.432 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- 
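The block of jq probes repeated above once per base bdev is the property check: raid_bdev1 and each ptN are dumped with bdev_get_bdevs, and block_size, md_size, md_interleave and dif_type of every base bdev must match the raid volume's values, which for this run means 512-byte blocks and no metadata or DIF. Roughly, what the helper does per base bdev (a paraphrase of the traced script, not a verbatim copy):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_info=$($RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]')
  for name in pt1 pt2 pt3 pt4; do
      base_info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
      # every base bdev must agree with the raid volume: block size, metadata size/layout, DIF type
      for field in .block_size .md_size .md_interleave .dif_type; do
          [[ "$(jq "$field" <<< "$base_info")" == "$(jq "$field" <<< "$raid_info")" ]] || exit 1
      done
  done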
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:49.706 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:26:49.964 [2024-07-15 14:19:35.850142] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:49.964 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9282d4cb-92df-4a38-b6f5-6430988b1527 00:26:49.964 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9282d4cb-92df-4a38-b6f5-6430988b1527 ']' 00:26:49.964 14:19:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:50.222 [2024-07-15 14:19:36.093914] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:50.222 [2024-07-15 14:19:36.094121] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:50.222 [2024-07-15 14:19:36.094300] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:50.222 [2024-07-15 14:19:36.094459] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:50.222 [2024-07-15 14:19:36.094564] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:26:50.222 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:50.222 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:26:50.479 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:26:50.479 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:26:50.479 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:50.479 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:26:50.737 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:50.737 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:50.995 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:50.995 14:19:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:26:51.253 14:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:26:51.253 14:19:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:26:51.512 14:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:26:51.512 14:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:26:51.771 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:26:52.029 [2024-07-15 14:19:37.941303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:52.029 [2024-07-15 14:19:37.943062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:52.029 [2024-07-15 14:19:37.943248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:26:52.029 [2024-07-15 14:19:37.943446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:26:52.029 [2024-07-15 14:19:37.943614] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:52.029 [2024-07-15 14:19:37.943847] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:52.029 [2024-07-15 14:19:37.944033] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:26:52.029 [2024-07-15 14:19:37.944191] 
bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:26:52.029 [2024-07-15 14:19:37.944326] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:52.029 [2024-07-15 14:19:37.944459] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:26:52.029 request: 00:26:52.029 { 00:26:52.029 "name": "raid_bdev1", 00:26:52.029 "raid_level": "raid1", 00:26:52.029 "base_bdevs": [ 00:26:52.029 "malloc1", 00:26:52.029 "malloc2", 00:26:52.029 "malloc3", 00:26:52.029 "malloc4" 00:26:52.029 ], 00:26:52.029 "superblock": false, 00:26:52.029 "method": "bdev_raid_create", 00:26:52.029 "req_id": 1 00:26:52.029 } 00:26:52.029 Got JSON-RPC error response 00:26:52.029 response: 00:26:52.029 { 00:26:52.029 "code": -17, 00:26:52.029 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:52.029 } 00:26:52.029 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:26:52.029 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:52.029 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:52.029 14:19:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:52.029 14:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.029 14:19:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:26:52.287 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:26:52.287 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:26:52.287 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:52.545 [2024-07-15 14:19:38.420123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:52.545 [2024-07-15 14:19:38.420429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:52.545 [2024-07-15 14:19:38.420509] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:26:52.545 [2024-07-15 14:19:38.420809] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:52.545 [2024-07-15 14:19:38.422716] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:52.545 [2024-07-15 14:19:38.422929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:52.545 [2024-07-15 14:19:38.423135] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:52.545 [2024-07-15 14:19:38.423318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:52.545 pt1 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.545 14:19:38 
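Before this point the trace tore the volume down (bdev_raid_delete raid_bdev1, then bdev_passthru_delete pt1 through pt4), which leaves the raid superblock sitting on the malloc bdevs. The failed bdev_raid_create above is the point of the test: building a new raid1 directly on those malloc bdevs must be rejected with JSON-RPC error -17, "Failed to create RAID bdev raid_bdev1: File exists". As a standalone check, reusing the same shorthand:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # the malloc bdevs still carry raid_bdev1's superblock, so this create must fail (code -17)
  if $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
      echo "bdev_raid_create unexpectedly succeeded" >&2
      exit 1
  fi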
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.545 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.803 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.803 "name": "raid_bdev1", 00:26:52.803 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:52.803 "strip_size_kb": 0, 00:26:52.803 "state": "configuring", 00:26:52.803 "raid_level": "raid1", 00:26:52.803 "superblock": true, 00:26:52.803 "num_base_bdevs": 4, 00:26:52.803 "num_base_bdevs_discovered": 1, 00:26:52.803 "num_base_bdevs_operational": 4, 00:26:52.803 "base_bdevs_list": [ 00:26:52.803 { 00:26:52.803 "name": "pt1", 00:26:52.803 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:52.803 "is_configured": true, 00:26:52.803 "data_offset": 2048, 00:26:52.803 "data_size": 63488 00:26:52.803 }, 00:26:52.803 { 00:26:52.803 "name": null, 00:26:52.803 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:52.803 "is_configured": false, 00:26:52.803 "data_offset": 2048, 00:26:52.803 "data_size": 63488 00:26:52.803 }, 00:26:52.803 { 00:26:52.803 "name": null, 00:26:52.803 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:52.803 "is_configured": false, 00:26:52.803 "data_offset": 2048, 00:26:52.803 "data_size": 63488 00:26:52.803 }, 00:26:52.803 { 00:26:52.803 "name": null, 00:26:52.803 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:52.803 "is_configured": false, 00:26:52.803 "data_offset": 2048, 00:26:52.803 "data_size": 63488 00:26:52.803 } 00:26:52.803 ] 00:26:52.803 }' 00:26:52.803 14:19:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.803 14:19:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.368 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:26:53.368 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:53.627 [2024-07-15 14:19:39.566118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:53.627 [2024-07-15 14:19:39.566484] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:53.627 [2024-07-15 14:19:39.566572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:26:53.627 [2024-07-15 14:19:39.566867] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:53.627 [2024-07-15 14:19:39.567378] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:53.627 [2024-07-15 14:19:39.567543] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:26:53.627 [2024-07-15 14:19:39.567798] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:53.627 [2024-07-15 14:19:39.567939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:53.627 pt2 00:26:53.627 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:26:53.885 [2024-07-15 14:19:39.850226] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:53.885 14:19:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.143 14:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:54.143 "name": "raid_bdev1", 00:26:54.143 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:54.143 "strip_size_kb": 0, 00:26:54.143 "state": "configuring", 00:26:54.143 "raid_level": "raid1", 00:26:54.143 "superblock": true, 00:26:54.143 "num_base_bdevs": 4, 00:26:54.143 "num_base_bdevs_discovered": 1, 00:26:54.143 "num_base_bdevs_operational": 4, 00:26:54.143 "base_bdevs_list": [ 00:26:54.143 { 00:26:54.143 "name": "pt1", 00:26:54.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:54.143 "is_configured": true, 00:26:54.143 "data_offset": 2048, 00:26:54.143 "data_size": 63488 00:26:54.143 }, 00:26:54.143 { 00:26:54.143 "name": null, 00:26:54.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:54.143 "is_configured": false, 00:26:54.143 "data_offset": 2048, 00:26:54.143 "data_size": 63488 00:26:54.143 }, 00:26:54.143 { 00:26:54.143 "name": null, 00:26:54.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:54.143 "is_configured": false, 00:26:54.143 "data_offset": 2048, 00:26:54.144 "data_size": 63488 00:26:54.144 }, 00:26:54.144 { 00:26:54.144 "name": null, 00:26:54.144 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:54.144 "is_configured": false, 00:26:54.144 "data_offset": 2048, 00:26:54.144 "data_size": 63488 00:26:54.144 } 00:26:54.144 ] 00:26:54.144 }' 00:26:54.144 14:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:54.144 14:19:40 
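Recreating pt1 alone put raid_bdev1 back into the configuring state with one of four base bdevs discovered. The pt2 create/delete pair just traced checks that losing a base bdev at this stage only drops it from the set instead of destroying the half-assembled raid; afterwards the volume is still configuring with pt1 as its only discovered member. In isolation:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # add pt2 while raid_bdev1 is still configuring, then take it away again
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $RPC bdev_passthru_delete pt2
  # raid_bdev1 must still exist, state "configuring", with only pt1 discovered
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'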
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.079 14:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:26:55.079 14:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:55.079 14:19:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:55.337 [2024-07-15 14:19:41.126319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:55.337 [2024-07-15 14:19:41.126568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.337 [2024-07-15 14:19:41.126781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:26:55.337 [2024-07-15 14:19:41.126979] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.337 [2024-07-15 14:19:41.127439] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.337 [2024-07-15 14:19:41.127605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:55.337 [2024-07-15 14:19:41.127822] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:55.337 [2024-07-15 14:19:41.127962] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:55.337 pt2 00:26:55.337 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:55.337 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:55.337 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:26:55.595 [2024-07-15 14:19:41.370362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:26:55.595 [2024-07-15 14:19:41.370619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.595 [2024-07-15 14:19:41.370829] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:26:55.595 [2024-07-15 14:19:41.370997] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.595 [2024-07-15 14:19:41.371483] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.595 [2024-07-15 14:19:41.371647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:26:55.596 [2024-07-15 14:19:41.371858] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:26:55.596 [2024-07-15 14:19:41.372006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:26:55.596 pt3 00:26:55.596 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:55.596 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:55.596 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:26:55.855 [2024-07-15 14:19:41.622438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:26:55.855 [2024-07-15 14:19:41.622708] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:55.855 [2024-07-15 14:19:41.622885] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:26:55.855 [2024-07-15 14:19:41.623048] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:55.855 [2024-07-15 14:19:41.623540] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:55.855 [2024-07-15 14:19:41.623753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:26:55.856 [2024-07-15 14:19:41.623978] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:26:55.856 [2024-07-15 14:19:41.624116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:26:55.856 [2024-07-15 14:19:41.624354] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:26:55.856 [2024-07-15 14:19:41.624479] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:55.856 [2024-07-15 14:19:41.624611] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:55.856 [2024-07-15 14:19:41.624988] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:26:55.856 [2024-07-15 14:19:41.625141] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:26:55.856 [2024-07-15 14:19:41.625350] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:55.856 pt4 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.856 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.118 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:56.118 "name": "raid_bdev1", 00:26:56.118 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:56.118 "strip_size_kb": 0, 00:26:56.118 "state": "online", 00:26:56.118 "raid_level": "raid1", 00:26:56.118 "superblock": true, 00:26:56.118 
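With pt2, pt3 and pt4 registered again, the examine path finds the matching raid superblock on each one, claims it, and raid_bdev1 assembles back to the online state on its own; no second bdev_raid_create is needed. The tail of the sequence amounts to:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # re-register the remaining base bdevs; each is claimed via its raid superblock
  for i in 2 3 4; do
      $RPC bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  # once the last one is back the raid transitions from configuring to online
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # expect: online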
"num_base_bdevs": 4, 00:26:56.118 "num_base_bdevs_discovered": 4, 00:26:56.118 "num_base_bdevs_operational": 4, 00:26:56.118 "base_bdevs_list": [ 00:26:56.118 { 00:26:56.118 "name": "pt1", 00:26:56.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:56.118 "is_configured": true, 00:26:56.118 "data_offset": 2048, 00:26:56.118 "data_size": 63488 00:26:56.118 }, 00:26:56.118 { 00:26:56.118 "name": "pt2", 00:26:56.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:56.118 "is_configured": true, 00:26:56.118 "data_offset": 2048, 00:26:56.118 "data_size": 63488 00:26:56.118 }, 00:26:56.118 { 00:26:56.118 "name": "pt3", 00:26:56.118 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:56.118 "is_configured": true, 00:26:56.118 "data_offset": 2048, 00:26:56.118 "data_size": 63488 00:26:56.118 }, 00:26:56.118 { 00:26:56.118 "name": "pt4", 00:26:56.118 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:56.118 "is_configured": true, 00:26:56.118 "data_offset": 2048, 00:26:56.118 "data_size": 63488 00:26:56.118 } 00:26:56.118 ] 00:26:56.118 }' 00:26:56.118 14:19:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:56.118 14:19:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:56.686 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:26:56.944 [2024-07-15 14:19:42.906893] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:56.944 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:26:56.944 "name": "raid_bdev1", 00:26:56.944 "aliases": [ 00:26:56.944 "9282d4cb-92df-4a38-b6f5-6430988b1527" 00:26:56.944 ], 00:26:56.944 "product_name": "Raid Volume", 00:26:56.944 "block_size": 512, 00:26:56.944 "num_blocks": 63488, 00:26:56.944 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:56.944 "assigned_rate_limits": { 00:26:56.944 "rw_ios_per_sec": 0, 00:26:56.944 "rw_mbytes_per_sec": 0, 00:26:56.944 "r_mbytes_per_sec": 0, 00:26:56.944 "w_mbytes_per_sec": 0 00:26:56.944 }, 00:26:56.944 "claimed": false, 00:26:56.944 "zoned": false, 00:26:56.944 "supported_io_types": { 00:26:56.944 "read": true, 00:26:56.944 "write": true, 00:26:56.944 "unmap": false, 00:26:56.944 "flush": false, 00:26:56.944 "reset": true, 00:26:56.944 "nvme_admin": false, 00:26:56.944 "nvme_io": false, 00:26:56.944 "nvme_io_md": false, 00:26:56.944 "write_zeroes": true, 00:26:56.944 "zcopy": false, 00:26:56.944 "get_zone_info": false, 00:26:56.944 "zone_management": false, 00:26:56.944 "zone_append": false, 00:26:56.944 "compare": false, 00:26:56.944 "compare_and_write": false, 00:26:56.944 "abort": false, 00:26:56.944 "seek_hole": false, 
00:26:56.944 "seek_data": false, 00:26:56.944 "copy": false, 00:26:56.944 "nvme_iov_md": false 00:26:56.944 }, 00:26:56.944 "memory_domains": [ 00:26:56.944 { 00:26:56.944 "dma_device_id": "system", 00:26:56.944 "dma_device_type": 1 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.944 "dma_device_type": 2 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "dma_device_id": "system", 00:26:56.944 "dma_device_type": 1 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.944 "dma_device_type": 2 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "dma_device_id": "system", 00:26:56.944 "dma_device_type": 1 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.944 "dma_device_type": 2 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "dma_device_id": "system", 00:26:56.944 "dma_device_type": 1 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:56.944 "dma_device_type": 2 00:26:56.944 } 00:26:56.944 ], 00:26:56.944 "driver_specific": { 00:26:56.944 "raid": { 00:26:56.944 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:26:56.944 "strip_size_kb": 0, 00:26:56.944 "state": "online", 00:26:56.944 "raid_level": "raid1", 00:26:56.944 "superblock": true, 00:26:56.944 "num_base_bdevs": 4, 00:26:56.944 "num_base_bdevs_discovered": 4, 00:26:56.944 "num_base_bdevs_operational": 4, 00:26:56.944 "base_bdevs_list": [ 00:26:56.944 { 00:26:56.944 "name": "pt1", 00:26:56.944 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:56.944 "is_configured": true, 00:26:56.944 "data_offset": 2048, 00:26:56.944 "data_size": 63488 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "name": "pt2", 00:26:56.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:56.944 "is_configured": true, 00:26:56.944 "data_offset": 2048, 00:26:56.944 "data_size": 63488 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "name": "pt3", 00:26:56.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:56.944 "is_configured": true, 00:26:56.944 "data_offset": 2048, 00:26:56.944 "data_size": 63488 00:26:56.944 }, 00:26:56.944 { 00:26:56.944 "name": "pt4", 00:26:56.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:56.944 "is_configured": true, 00:26:56.944 "data_offset": 2048, 00:26:56.944 "data_size": 63488 00:26:56.944 } 00:26:56.944 ] 00:26:56.944 } 00:26:56.944 } 00:26:56.944 }' 00:26:56.944 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:57.202 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:26:57.202 pt2 00:26:57.202 pt3 00:26:57.202 pt4' 00:26:57.202 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:57.202 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:57.202 14:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:57.460 "name": "pt1", 00:26:57.460 "aliases": [ 00:26:57.460 "00000000-0000-0000-0000-000000000001" 00:26:57.460 ], 00:26:57.460 "product_name": "passthru", 00:26:57.460 "block_size": 512, 00:26:57.460 "num_blocks": 65536, 00:26:57.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:57.460 "assigned_rate_limits": { 
00:26:57.460 "rw_ios_per_sec": 0, 00:26:57.460 "rw_mbytes_per_sec": 0, 00:26:57.460 "r_mbytes_per_sec": 0, 00:26:57.460 "w_mbytes_per_sec": 0 00:26:57.460 }, 00:26:57.460 "claimed": true, 00:26:57.460 "claim_type": "exclusive_write", 00:26:57.460 "zoned": false, 00:26:57.460 "supported_io_types": { 00:26:57.460 "read": true, 00:26:57.460 "write": true, 00:26:57.460 "unmap": true, 00:26:57.460 "flush": true, 00:26:57.460 "reset": true, 00:26:57.460 "nvme_admin": false, 00:26:57.460 "nvme_io": false, 00:26:57.460 "nvme_io_md": false, 00:26:57.460 "write_zeroes": true, 00:26:57.460 "zcopy": true, 00:26:57.460 "get_zone_info": false, 00:26:57.460 "zone_management": false, 00:26:57.460 "zone_append": false, 00:26:57.460 "compare": false, 00:26:57.460 "compare_and_write": false, 00:26:57.460 "abort": true, 00:26:57.460 "seek_hole": false, 00:26:57.460 "seek_data": false, 00:26:57.460 "copy": true, 00:26:57.460 "nvme_iov_md": false 00:26:57.460 }, 00:26:57.460 "memory_domains": [ 00:26:57.460 { 00:26:57.460 "dma_device_id": "system", 00:26:57.460 "dma_device_type": 1 00:26:57.460 }, 00:26:57.460 { 00:26:57.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.460 "dma_device_type": 2 00:26:57.460 } 00:26:57.460 ], 00:26:57.460 "driver_specific": { 00:26:57.460 "passthru": { 00:26:57.460 "name": "pt1", 00:26:57.460 "base_bdev_name": "malloc1" 00:26:57.460 } 00:26:57.460 } 00:26:57.460 }' 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.460 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:57.717 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:57.717 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.717 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:57.717 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:57.717 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:57.717 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:26:57.717 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:57.975 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:57.975 "name": "pt2", 00:26:57.975 "aliases": [ 00:26:57.975 "00000000-0000-0000-0000-000000000002" 00:26:57.975 ], 00:26:57.975 "product_name": "passthru", 00:26:57.975 "block_size": 512, 00:26:57.975 "num_blocks": 65536, 00:26:57.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:57.975 "assigned_rate_limits": { 00:26:57.975 "rw_ios_per_sec": 0, 00:26:57.975 "rw_mbytes_per_sec": 0, 00:26:57.975 "r_mbytes_per_sec": 0, 00:26:57.975 "w_mbytes_per_sec": 0 00:26:57.975 
}, 00:26:57.975 "claimed": true, 00:26:57.975 "claim_type": "exclusive_write", 00:26:57.975 "zoned": false, 00:26:57.975 "supported_io_types": { 00:26:57.975 "read": true, 00:26:57.975 "write": true, 00:26:57.975 "unmap": true, 00:26:57.975 "flush": true, 00:26:57.975 "reset": true, 00:26:57.975 "nvme_admin": false, 00:26:57.975 "nvme_io": false, 00:26:57.975 "nvme_io_md": false, 00:26:57.975 "write_zeroes": true, 00:26:57.975 "zcopy": true, 00:26:57.975 "get_zone_info": false, 00:26:57.975 "zone_management": false, 00:26:57.975 "zone_append": false, 00:26:57.975 "compare": false, 00:26:57.975 "compare_and_write": false, 00:26:57.975 "abort": true, 00:26:57.975 "seek_hole": false, 00:26:57.975 "seek_data": false, 00:26:57.975 "copy": true, 00:26:57.975 "nvme_iov_md": false 00:26:57.975 }, 00:26:57.975 "memory_domains": [ 00:26:57.975 { 00:26:57.975 "dma_device_id": "system", 00:26:57.975 "dma_device_type": 1 00:26:57.975 }, 00:26:57.975 { 00:26:57.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:57.975 "dma_device_type": 2 00:26:57.975 } 00:26:57.975 ], 00:26:57.975 "driver_specific": { 00:26:57.975 "passthru": { 00:26:57.975 "name": "pt2", 00:26:57.975 "base_bdev_name": "malloc2" 00:26:57.975 } 00:26:57.975 } 00:26:57.975 }' 00:26:57.975 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.975 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:57.975 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:57.975 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:57.975 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.233 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:58.233 14:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:26:58.233 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:58.491 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:58.491 "name": "pt3", 00:26:58.491 "aliases": [ 00:26:58.491 "00000000-0000-0000-0000-000000000003" 00:26:58.491 ], 00:26:58.491 "product_name": "passthru", 00:26:58.491 "block_size": 512, 00:26:58.491 "num_blocks": 65536, 00:26:58.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:26:58.491 "assigned_rate_limits": { 00:26:58.491 "rw_ios_per_sec": 0, 00:26:58.491 "rw_mbytes_per_sec": 0, 00:26:58.491 "r_mbytes_per_sec": 0, 00:26:58.491 "w_mbytes_per_sec": 0 00:26:58.491 }, 00:26:58.491 "claimed": true, 00:26:58.491 "claim_type": "exclusive_write", 00:26:58.491 "zoned": false, 00:26:58.491 "supported_io_types": { 
00:26:58.491 "read": true, 00:26:58.491 "write": true, 00:26:58.491 "unmap": true, 00:26:58.491 "flush": true, 00:26:58.491 "reset": true, 00:26:58.491 "nvme_admin": false, 00:26:58.491 "nvme_io": false, 00:26:58.491 "nvme_io_md": false, 00:26:58.491 "write_zeroes": true, 00:26:58.491 "zcopy": true, 00:26:58.491 "get_zone_info": false, 00:26:58.491 "zone_management": false, 00:26:58.491 "zone_append": false, 00:26:58.491 "compare": false, 00:26:58.491 "compare_and_write": false, 00:26:58.491 "abort": true, 00:26:58.491 "seek_hole": false, 00:26:58.491 "seek_data": false, 00:26:58.491 "copy": true, 00:26:58.491 "nvme_iov_md": false 00:26:58.491 }, 00:26:58.491 "memory_domains": [ 00:26:58.491 { 00:26:58.491 "dma_device_id": "system", 00:26:58.491 "dma_device_type": 1 00:26:58.491 }, 00:26:58.491 { 00:26:58.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:58.491 "dma_device_type": 2 00:26:58.491 } 00:26:58.491 ], 00:26:58.491 "driver_specific": { 00:26:58.491 "passthru": { 00:26:58.491 "name": "pt3", 00:26:58.491 "base_bdev_name": "malloc3" 00:26:58.491 } 00:26:58.491 } 00:26:58.491 }' 00:26:58.491 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:58.749 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:59.007 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:59.007 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:59.007 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:26:59.007 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:26:59.007 14:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:26:59.285 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:26:59.285 "name": "pt4", 00:26:59.285 "aliases": [ 00:26:59.285 "00000000-0000-0000-0000-000000000004" 00:26:59.285 ], 00:26:59.285 "product_name": "passthru", 00:26:59.285 "block_size": 512, 00:26:59.285 "num_blocks": 65536, 00:26:59.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:26:59.285 "assigned_rate_limits": { 00:26:59.285 "rw_ios_per_sec": 0, 00:26:59.285 "rw_mbytes_per_sec": 0, 00:26:59.285 "r_mbytes_per_sec": 0, 00:26:59.285 "w_mbytes_per_sec": 0 00:26:59.285 }, 00:26:59.285 "claimed": true, 00:26:59.285 "claim_type": "exclusive_write", 00:26:59.285 "zoned": false, 00:26:59.285 "supported_io_types": { 00:26:59.285 "read": true, 00:26:59.285 "write": true, 00:26:59.285 "unmap": true, 00:26:59.285 "flush": true, 00:26:59.285 "reset": true, 00:26:59.285 
"nvme_admin": false, 00:26:59.285 "nvme_io": false, 00:26:59.285 "nvme_io_md": false, 00:26:59.285 "write_zeroes": true, 00:26:59.285 "zcopy": true, 00:26:59.285 "get_zone_info": false, 00:26:59.285 "zone_management": false, 00:26:59.285 "zone_append": false, 00:26:59.285 "compare": false, 00:26:59.285 "compare_and_write": false, 00:26:59.285 "abort": true, 00:26:59.285 "seek_hole": false, 00:26:59.285 "seek_data": false, 00:26:59.285 "copy": true, 00:26:59.285 "nvme_iov_md": false 00:26:59.285 }, 00:26:59.285 "memory_domains": [ 00:26:59.285 { 00:26:59.285 "dma_device_id": "system", 00:26:59.285 "dma_device_type": 1 00:26:59.285 }, 00:26:59.285 { 00:26:59.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.285 "dma_device_type": 2 00:26:59.285 } 00:26:59.285 ], 00:26:59.285 "driver_specific": { 00:26:59.285 "passthru": { 00:26:59.285 "name": "pt4", 00:26:59.285 "base_bdev_name": "malloc4" 00:26:59.285 } 00:26:59.285 } 00:26:59.285 }' 00:26:59.285 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:59.285 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:26:59.285 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:26:59.285 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:59.285 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:59.568 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:26:59.827 [2024-07-15 14:19:45.766409] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:59.827 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9282d4cb-92df-4a38-b6f5-6430988b1527 '!=' 9282d4cb-92df-4a38-b6f5-6430988b1527 ']' 00:26:59.827 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:26:59.827 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:59.827 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:59.827 14:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:27:00.086 [2024-07-15 14:19:46.062236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:00.086 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:00.086 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:00.345 
14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:00.345 "name": "raid_bdev1", 00:27:00.345 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:27:00.345 "strip_size_kb": 0, 00:27:00.345 "state": "online", 00:27:00.345 "raid_level": "raid1", 00:27:00.345 "superblock": true, 00:27:00.345 "num_base_bdevs": 4, 00:27:00.345 "num_base_bdevs_discovered": 3, 00:27:00.345 "num_base_bdevs_operational": 3, 00:27:00.345 "base_bdevs_list": [ 00:27:00.345 { 00:27:00.345 "name": null, 00:27:00.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.345 "is_configured": false, 00:27:00.345 "data_offset": 2048, 00:27:00.345 "data_size": 63488 00:27:00.345 }, 00:27:00.345 { 00:27:00.345 "name": "pt2", 00:27:00.345 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:00.345 "is_configured": true, 00:27:00.345 "data_offset": 2048, 00:27:00.345 "data_size": 63488 00:27:00.345 }, 00:27:00.345 { 00:27:00.345 "name": "pt3", 00:27:00.345 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:00.345 "is_configured": true, 00:27:00.345 "data_offset": 2048, 00:27:00.345 "data_size": 63488 00:27:00.345 }, 00:27:00.345 { 00:27:00.345 "name": "pt4", 00:27:00.345 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:00.345 "is_configured": true, 00:27:00.345 "data_offset": 2048, 00:27:00.345 "data_size": 63488 00:27:00.345 } 00:27:00.345 ] 00:27:00.345 }' 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:00.345 14:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.281 14:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:01.281 [2024-07-15 14:19:47.222387] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:01.281 [2024-07-15 14:19:47.222658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:01.281 [2024-07-15 14:19:47.222850] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:01.281 [2024-07-15 14:19:47.223055] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:01.281 [2024-07-15 14:19:47.223185] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:27:01.281 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:27:01.281 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.540 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:27:01.540 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:27:01.540 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:27:01.540 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:01.540 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:27:01.799 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:01.799 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:01.799 14:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:27:02.058 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:02.058 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:02.058 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:27:02.316 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:27:02.316 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:27:02.316 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:27:02.316 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:02.316 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:02.574 [2024-07-15 14:19:48.509277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:02.574 [2024-07-15 14:19:48.509621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:02.574 [2024-07-15 14:19:48.509804] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:27:02.574 [2024-07-15 14:19:48.509968] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:02.574 [2024-07-15 14:19:48.511874] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:02.574 [2024-07-15 14:19:48.512044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:02.574 [2024-07-15 14:19:48.512244] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:02.574 [2024-07-15 14:19:48.512385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:02.574 pt2 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.574 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:02.865 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:02.865 "name": "raid_bdev1", 00:27:02.865 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:27:02.865 "strip_size_kb": 0, 00:27:02.865 "state": "configuring", 00:27:02.865 "raid_level": "raid1", 00:27:02.865 "superblock": true, 00:27:02.866 "num_base_bdevs": 4, 00:27:02.866 "num_base_bdevs_discovered": 1, 00:27:02.866 "num_base_bdevs_operational": 3, 00:27:02.866 "base_bdevs_list": [ 00:27:02.866 { 00:27:02.866 "name": null, 00:27:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:02.866 "is_configured": false, 00:27:02.866 "data_offset": 2048, 00:27:02.866 "data_size": 63488 00:27:02.866 }, 00:27:02.866 { 00:27:02.866 "name": "pt2", 00:27:02.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:02.866 "is_configured": true, 00:27:02.866 "data_offset": 2048, 00:27:02.866 "data_size": 63488 00:27:02.866 }, 00:27:02.866 { 00:27:02.866 "name": null, 00:27:02.866 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:02.866 "is_configured": false, 00:27:02.866 "data_offset": 2048, 00:27:02.866 "data_size": 63488 00:27:02.866 }, 00:27:02.866 { 00:27:02.866 "name": null, 00:27:02.866 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:02.866 "is_configured": false, 00:27:02.866 "data_offset": 2048, 00:27:02.866 "data_size": 63488 00:27:02.866 } 00:27:02.866 ] 00:27:02.866 }' 00:27:02.866 14:19:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:02.866 14:19:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.431 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:03.432 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:03.432 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:04.001 [2024-07-15 14:19:49.701043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:04.001 [2024-07-15 14:19:49.701353] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.001 [2024-07-15 14:19:49.701513] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c080 00:27:04.001 [2024-07-15 14:19:49.701661] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.001 [2024-07-15 14:19:49.702090] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.001 [2024-07-15 14:19:49.702250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:04.001 [2024-07-15 14:19:49.702477] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:04.001 [2024-07-15 14:19:49.702602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:04.001 pt3 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.001 14:19:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.258 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.258 "name": "raid_bdev1", 00:27:04.258 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:27:04.258 "strip_size_kb": 0, 00:27:04.258 "state": "configuring", 00:27:04.258 "raid_level": "raid1", 00:27:04.258 "superblock": true, 00:27:04.258 "num_base_bdevs": 4, 00:27:04.258 "num_base_bdevs_discovered": 2, 00:27:04.258 "num_base_bdevs_operational": 3, 00:27:04.258 "base_bdevs_list": [ 00:27:04.258 { 00:27:04.258 "name": null, 00:27:04.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.258 "is_configured": false, 00:27:04.259 "data_offset": 2048, 00:27:04.259 "data_size": 63488 00:27:04.259 }, 00:27:04.259 { 00:27:04.259 "name": "pt2", 00:27:04.259 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:04.259 "is_configured": true, 00:27:04.259 "data_offset": 2048, 00:27:04.259 "data_size": 63488 00:27:04.259 }, 00:27:04.259 { 00:27:04.259 "name": "pt3", 00:27:04.259 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:04.259 "is_configured": true, 00:27:04.259 "data_offset": 2048, 00:27:04.259 "data_size": 63488 00:27:04.259 }, 00:27:04.259 { 00:27:04.259 "name": null, 00:27:04.259 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:04.259 "is_configured": false, 00:27:04.259 "data_offset": 2048, 00:27:04.259 "data_size": 63488 00:27:04.259 } 00:27:04.259 ] 00:27:04.259 }' 00:27:04.259 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:27:04.259 14:19:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:04.824 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:27:04.824 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:27:04.824 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:27:04.824 14:19:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:05.082 [2024-07-15 14:19:50.985232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:05.082 [2024-07-15 14:19:50.985588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:05.082 [2024-07-15 14:19:50.985673] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:27:05.082 [2024-07-15 14:19:50.985869] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:05.082 [2024-07-15 14:19:50.986270] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:05.082 [2024-07-15 14:19:50.986427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:05.082 [2024-07-15 14:19:50.986651] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:05.082 [2024-07-15 14:19:50.986802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:05.082 [2024-07-15 14:19:50.986997] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:27:05.082 [2024-07-15 14:19:50.987114] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:05.082 [2024-07-15 14:19:50.987247] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:05.082 [2024-07-15 14:19:50.987585] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:27:05.082 [2024-07-15 14:19:50.987744] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:27:05.082 [2024-07-15 14:19:50.987954] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:05.082 pt4 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:05.082 14:19:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.082 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.340 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:05.340 "name": "raid_bdev1", 00:27:05.340 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:27:05.340 "strip_size_kb": 0, 00:27:05.340 "state": "online", 00:27:05.340 "raid_level": "raid1", 00:27:05.340 "superblock": true, 00:27:05.340 "num_base_bdevs": 4, 00:27:05.340 "num_base_bdevs_discovered": 3, 00:27:05.340 "num_base_bdevs_operational": 3, 00:27:05.340 "base_bdevs_list": [ 00:27:05.340 { 00:27:05.340 "name": null, 00:27:05.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.340 "is_configured": false, 00:27:05.340 "data_offset": 2048, 00:27:05.340 "data_size": 63488 00:27:05.340 }, 00:27:05.340 { 00:27:05.340 "name": "pt2", 00:27:05.340 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:05.340 "is_configured": true, 00:27:05.340 "data_offset": 2048, 00:27:05.340 "data_size": 63488 00:27:05.340 }, 00:27:05.340 { 00:27:05.340 "name": "pt3", 00:27:05.340 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:05.340 "is_configured": true, 00:27:05.340 "data_offset": 2048, 00:27:05.340 "data_size": 63488 00:27:05.340 }, 00:27:05.340 { 00:27:05.340 "name": "pt4", 00:27:05.340 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:05.340 "is_configured": true, 00:27:05.340 "data_offset": 2048, 00:27:05.340 "data_size": 63488 00:27:05.340 } 00:27:05.340 ] 00:27:05.340 }' 00:27:05.340 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:05.340 14:19:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:06.271 14:19:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:06.271 [2024-07-15 14:19:52.224211] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:06.271 [2024-07-15 14:19:52.224498] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:06.271 [2024-07-15 14:19:52.224696] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:06.271 [2024-07-15 14:19:52.224876] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:06.271 [2024-07-15 14:19:52.225020] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:27:06.271 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.271 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:27:06.836 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:27:06.836 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:27:06.836 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:27:06.836 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:27:06.836 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt4 00:27:06.836 14:19:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:07.401 [2024-07-15 14:19:53.100364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:07.401 [2024-07-15 14:19:53.100707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:07.401 [2024-07-15 14:19:53.100944] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:27:07.401 [2024-07-15 14:19:53.101106] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:07.401 [2024-07-15 14:19:53.103198] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:07.401 [2024-07-15 14:19:53.103372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:07.401 [2024-07-15 14:19:53.103605] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:07.401 [2024-07-15 14:19:53.103787] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:07.401 [2024-07-15 14:19:53.104047] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:07.401 [2024-07-15 14:19:53.104172] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:07.401 [2024-07-15 14:19:53.104305] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:27:07.401 [2024-07-15 14:19:53.104486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:07.401 [2024-07-15 14:19:53.104709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:07.401 pt1 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:07.401 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:07.658 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:27:07.658 "name": "raid_bdev1", 00:27:07.658 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:27:07.658 "strip_size_kb": 0, 00:27:07.658 "state": "configuring", 00:27:07.658 "raid_level": "raid1", 00:27:07.658 "superblock": true, 00:27:07.658 "num_base_bdevs": 4, 00:27:07.658 "num_base_bdevs_discovered": 2, 00:27:07.658 "num_base_bdevs_operational": 3, 00:27:07.658 "base_bdevs_list": [ 00:27:07.658 { 00:27:07.658 "name": null, 00:27:07.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.658 "is_configured": false, 00:27:07.658 "data_offset": 2048, 00:27:07.658 "data_size": 63488 00:27:07.658 }, 00:27:07.658 { 00:27:07.658 "name": "pt2", 00:27:07.658 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:07.658 "is_configured": true, 00:27:07.658 "data_offset": 2048, 00:27:07.658 "data_size": 63488 00:27:07.658 }, 00:27:07.658 { 00:27:07.658 "name": "pt3", 00:27:07.658 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:07.658 "is_configured": true, 00:27:07.658 "data_offset": 2048, 00:27:07.658 "data_size": 63488 00:27:07.658 }, 00:27:07.658 { 00:27:07.658 "name": null, 00:27:07.658 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:07.658 "is_configured": false, 00:27:07.658 "data_offset": 2048, 00:27:07.658 "data_size": 63488 00:27:07.658 } 00:27:07.658 ] 00:27:07.658 }' 00:27:07.658 14:19:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:07.659 14:19:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.224 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:27:08.224 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:08.481 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:27:08.481 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:27:08.740 [2024-07-15 14:19:54.617088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:27:08.740 [2024-07-15 14:19:54.617495] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:08.740 [2024-07-15 14:19:54.617586] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d280 00:27:08.740 [2024-07-15 14:19:54.617901] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:08.740 [2024-07-15 14:19:54.618513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:08.740 [2024-07-15 14:19:54.618685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:27:08.740 [2024-07-15 14:19:54.618935] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:27:08.740 [2024-07-15 14:19:54.619079] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:27:08.740 [2024-07-15 14:19:54.619308] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cf80 00:27:08.740 [2024-07-15 14:19:54.619432] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:08.740 [2024-07-15 14:19:54.619620] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:27:08.740 [2024-07-15 14:19:54.620012] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cf80 00:27:08.740 [2024-07-15 14:19:54.620150] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cf80 00:27:08.740 [2024-07-15 14:19:54.620367] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:08.740 pt4 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.740 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.998 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:08.998 "name": "raid_bdev1", 00:27:08.998 "uuid": "9282d4cb-92df-4a38-b6f5-6430988b1527", 00:27:08.998 "strip_size_kb": 0, 00:27:08.998 "state": "online", 00:27:08.998 "raid_level": "raid1", 00:27:08.998 "superblock": true, 00:27:08.998 "num_base_bdevs": 4, 00:27:08.998 "num_base_bdevs_discovered": 3, 00:27:08.998 "num_base_bdevs_operational": 3, 00:27:08.998 "base_bdevs_list": [ 00:27:08.998 { 00:27:08.998 "name": null, 00:27:08.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:08.998 "is_configured": false, 00:27:08.998 "data_offset": 2048, 00:27:08.998 "data_size": 63488 00:27:08.998 }, 00:27:08.998 { 00:27:08.998 "name": "pt2", 00:27:08.998 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:08.998 "is_configured": true, 00:27:08.998 "data_offset": 2048, 00:27:08.998 "data_size": 63488 00:27:08.998 }, 00:27:08.998 { 00:27:08.998 "name": "pt3", 00:27:08.998 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:08.998 "is_configured": true, 00:27:08.998 "data_offset": 2048, 00:27:08.998 "data_size": 63488 00:27:08.998 }, 00:27:08.998 { 00:27:08.998 "name": "pt4", 00:27:08.998 "uuid": "00000000-0000-0000-0000-000000000004", 00:27:08.998 "is_configured": true, 00:27:08.998 "data_offset": 2048, 00:27:08.998 "data_size": 63488 00:27:08.998 } 00:27:08.998 ] 00:27:08.998 }' 00:27:08.998 14:19:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:08.998 14:19:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:09.933 14:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:09.933 14:19:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:27:09.933 14:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:27:09.933 14:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:09.933 14:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:27:10.191 [2024-07-15 14:19:56.184651] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 9282d4cb-92df-4a38-b6f5-6430988b1527 '!=' 9282d4cb-92df-4a38-b6f5-6430988b1527 ']' 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 209102 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 209102 ']' 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 209102 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 209102 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 209102' 00:27:10.471 killing process with pid 209102 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 209102 00:27:10.471 14:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 209102 00:27:10.471 [2024-07-15 14:19:56.228108] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:10.471 [2024-07-15 14:19:56.228191] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.471 [2024-07-15 14:19:56.228247] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.471 [2024-07-15 14:19:56.228379] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cf80 name raid_bdev1, state offline 00:27:10.729 [2024-07-15 14:19:56.570498] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:12.103 14:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:27:12.103 00:27:12.103 real 0m29.536s 00:27:12.103 user 0m54.290s 00:27:12.103 sys 0m3.346s 00:27:12.103 14:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.103 14:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.103 ************************************ 00:27:12.103 END TEST raid_superblock_test 00:27:12.103 ************************************ 00:27:12.103 14:19:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:12.103 14:19:57 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:27:12.103 14:19:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 
1 ']' 00:27:12.103 14:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.103 14:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:12.103 ************************************ 00:27:12.103 START TEST raid_read_error_test 00:27:12.103 ************************************ 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:12.103 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.w6foA7tV9j 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=209977 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 209977 /var/tmp/spdk-raid.sock 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 209977 ']' 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:12.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.104 14:19:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.104 [2024-07-15 14:19:57.821880] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:27:12.104 [2024-07-15 14:19:57.822303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid209977 ] 00:27:12.104 [2024-07-15 14:19:57.983574] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.362 [2024-07-15 14:19:58.201953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.620 [2024-07-15 14:19:58.405910] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:13.186 14:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.186 14:19:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:13.186 14:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:13.187 14:19:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:13.445 BaseBdev1_malloc 00:27:13.445 14:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:13.704 true 00:27:13.704 14:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:13.962 [2024-07-15 14:19:59.742148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:13.962 [2024-07-15 14:19:59.742541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.962 [2024-07-15 14:19:59.742630] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:13.962 [2024-07-15 14:19:59.742865] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.962 [2024-07-15 14:19:59.744662] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.962 [2024-07-15 14:19:59.744862] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:27:13.962 BaseBdev1 00:27:13.962 14:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:13.962 14:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:14.219 BaseBdev2_malloc 00:27:14.219 14:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:14.478 true 00:27:14.478 14:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:14.737 [2024-07-15 14:20:00.552692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:14.737 [2024-07-15 14:20:00.553023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:14.737 [2024-07-15 14:20:00.553199] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:14.737 [2024-07-15 14:20:00.553337] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:14.737 [2024-07-15 14:20:00.555185] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:14.737 [2024-07-15 14:20:00.555363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:14.737 BaseBdev2 00:27:14.737 14:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:14.737 14:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:14.995 BaseBdev3_malloc 00:27:14.995 14:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:15.253 true 00:27:15.253 14:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:15.511 [2024-07-15 14:20:01.355236] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:15.511 [2024-07-15 14:20:01.355517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:15.511 [2024-07-15 14:20:01.355670] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:15.512 [2024-07-15 14:20:01.355815] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:15.512 [2024-07-15 14:20:01.357650] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:15.512 [2024-07-15 14:20:01.357828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:15.512 BaseBdev3 00:27:15.512 14:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:15.512 14:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:15.770 BaseBdev4_malloc 00:27:15.770 14:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:16.030 true 00:27:16.030 14:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:16.289 [2024-07-15 14:20:02.201637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:27:16.289 [2024-07-15 14:20:02.201974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.289 [2024-07-15 14:20:02.202140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:16.289 [2024-07-15 14:20:02.202277] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.289 [2024-07-15 14:20:02.204136] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.289 [2024-07-15 14:20:02.204311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:16.289 BaseBdev4 00:27:16.289 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:16.546 [2024-07-15 14:20:02.445709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.546 [2024-07-15 14:20:02.447413] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:16.546 [2024-07-15 14:20:02.447596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:16.546 [2024-07-15 14:20:02.447780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:16.546 [2024-07-15 14:20:02.448058] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:27:16.546 [2024-07-15 14:20:02.448176] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:16.546 [2024-07-15 14:20:02.448330] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:16.546 [2024-07-15 14:20:02.448703] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:27:16.546 [2024-07-15 14:20:02.448847] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:27:16.546 [2024-07-15 14:20:02.449084] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:16.546 14:20:02 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:16.546 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.804 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:16.804 "name": "raid_bdev1", 00:27:16.804 "uuid": "dc11eb8c-a734-4407-8ceb-8fc3ed00e604", 00:27:16.804 "strip_size_kb": 0, 00:27:16.804 "state": "online", 00:27:16.804 "raid_level": "raid1", 00:27:16.804 "superblock": true, 00:27:16.804 "num_base_bdevs": 4, 00:27:16.804 "num_base_bdevs_discovered": 4, 00:27:16.804 "num_base_bdevs_operational": 4, 00:27:16.804 "base_bdevs_list": [ 00:27:16.804 { 00:27:16.804 "name": "BaseBdev1", 00:27:16.804 "uuid": "c85b7706-4acc-51b2-8999-7e977445ded3", 00:27:16.804 "is_configured": true, 00:27:16.804 "data_offset": 2048, 00:27:16.804 "data_size": 63488 00:27:16.804 }, 00:27:16.804 { 00:27:16.804 "name": "BaseBdev2", 00:27:16.804 "uuid": "ae8eabcb-cffd-5c3e-9d17-af194bc130fc", 00:27:16.804 "is_configured": true, 00:27:16.804 "data_offset": 2048, 00:27:16.804 "data_size": 63488 00:27:16.804 }, 00:27:16.804 { 00:27:16.804 "name": "BaseBdev3", 00:27:16.804 "uuid": "bf24af48-ad08-5f33-9633-75b8e229a01e", 00:27:16.804 "is_configured": true, 00:27:16.804 "data_offset": 2048, 00:27:16.804 "data_size": 63488 00:27:16.804 }, 00:27:16.804 { 00:27:16.804 "name": "BaseBdev4", 00:27:16.804 "uuid": "151d7b04-d274-5fea-95a0-cbf128c5e6d1", 00:27:16.804 "is_configured": true, 00:27:16.804 "data_offset": 2048, 00:27:16.804 "data_size": 63488 00:27:16.804 } 00:27:16.804 ] 00:27:16.804 }' 00:27:16.804 14:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:16.804 14:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.369 14:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:17.369 14:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:17.627 [2024-07-15 14:20:03.475033] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:18.561 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:18.819 14:20:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.819 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:19.077 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:19.077 "name": "raid_bdev1", 00:27:19.077 "uuid": "dc11eb8c-a734-4407-8ceb-8fc3ed00e604", 00:27:19.077 "strip_size_kb": 0, 00:27:19.077 "state": "online", 00:27:19.077 "raid_level": "raid1", 00:27:19.077 "superblock": true, 00:27:19.077 "num_base_bdevs": 4, 00:27:19.077 "num_base_bdevs_discovered": 4, 00:27:19.077 "num_base_bdevs_operational": 4, 00:27:19.077 "base_bdevs_list": [ 00:27:19.077 { 00:27:19.077 "name": "BaseBdev1", 00:27:19.077 "uuid": "c85b7706-4acc-51b2-8999-7e977445ded3", 00:27:19.077 "is_configured": true, 00:27:19.077 "data_offset": 2048, 00:27:19.077 "data_size": 63488 00:27:19.077 }, 00:27:19.077 { 00:27:19.077 "name": "BaseBdev2", 00:27:19.077 "uuid": "ae8eabcb-cffd-5c3e-9d17-af194bc130fc", 00:27:19.077 "is_configured": true, 00:27:19.077 "data_offset": 2048, 00:27:19.077 "data_size": 63488 00:27:19.077 }, 00:27:19.077 { 00:27:19.077 "name": "BaseBdev3", 00:27:19.077 "uuid": "bf24af48-ad08-5f33-9633-75b8e229a01e", 00:27:19.077 "is_configured": true, 00:27:19.077 "data_offset": 2048, 00:27:19.077 "data_size": 63488 00:27:19.077 }, 00:27:19.077 { 00:27:19.077 "name": "BaseBdev4", 00:27:19.077 "uuid": "151d7b04-d274-5fea-95a0-cbf128c5e6d1", 00:27:19.077 "is_configured": true, 00:27:19.077 "data_offset": 2048, 00:27:19.077 "data_size": 63488 00:27:19.077 } 00:27:19.077 ] 00:27:19.077 }' 00:27:19.077 14:20:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:19.077 14:20:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.644 14:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:20.210 [2024-07-15 14:20:05.934450] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:20.210 [2024-07-15 14:20:05.934761] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:20.210 [2024-07-15 14:20:05.936357] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:20.210 [2024-07-15 14:20:05.936517] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.210 [2024-07-15 14:20:05.936655] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:20.210 [2024-07-15 14:20:05.936819] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:27:20.211 0 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 209977 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 209977 ']' 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 209977 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 209977 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 209977' 00:27:20.211 killing process with pid 209977 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 209977 00:27:20.211 14:20:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 209977 00:27:20.211 [2024-07-15 14:20:05.976345] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:20.469 [2024-07-15 14:20:06.269640] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.w6foA7tV9j 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:21.843 00:27:21.843 real 0m9.688s 00:27:21.843 user 0m15.131s 00:27:21.843 sys 0m1.082s 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:21.843 14:20:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.843 ************************************ 00:27:21.843 END TEST raid_read_error_test 00:27:21.843 ************************************ 00:27:21.843 14:20:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:21.843 14:20:07 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:27:21.843 14:20:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:27:21.843 14:20:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:21.843 14:20:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:21.843 ************************************ 00:27:21.843 START TEST raid_write_error_test 00:27:21.843 ************************************ 00:27:21.843 14:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:27:21.843 14:20:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:27:21.843 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:27:21.843 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:27:21.843 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:27:21.843 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.843 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:27:21.843 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.F4ISx5DYiI 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=210192 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 210192 /var/tmp/spdk-raid.sock 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 210192 ']' 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:21.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:21.844 14:20:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.844 [2024-07-15 14:20:07.587824] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:27:21.844 [2024-07-15 14:20:07.588330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210192 ] 00:27:21.844 [2024-07-15 14:20:07.760332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.102 [2024-07-15 14:20:08.011232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.360 [2024-07-15 14:20:08.206118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:22.618 14:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.618 14:20:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:27:22.618 14:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:22.618 14:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:22.877 BaseBdev1_malloc 00:27:22.877 14:20:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:27:23.134 true 00:27:23.134 14:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:23.392 [2024-07-15 14:20:09.345493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:23.392 [2024-07-15 14:20:09.346109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:23.392 [2024-07-15 14:20:09.346360] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:27:23.392 [2024-07-15 14:20:09.346582] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:23.392 [2024-07-15 14:20:09.348523] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:23.392 [2024-07-15 14:20:09.348795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:23.392 BaseBdev1 00:27:23.392 14:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:23.392 14:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:23.650 BaseBdev2_malloc 00:27:23.650 14:20:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:27:23.910 true 00:27:23.910 14:20:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:24.167 [2024-07-15 14:20:10.126866] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:24.167 [2024-07-15 14:20:10.127348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.167 [2024-07-15 14:20:10.127587] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:24.167 [2024-07-15 14:20:10.127828] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.167 [2024-07-15 14:20:10.129790] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.167 [2024-07-15 14:20:10.130010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:24.167 BaseBdev2 00:27:24.167 14:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:24.167 14:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:24.425 BaseBdev3_malloc 00:27:24.425 14:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:27:24.688 true 00:27:24.688 14:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:24.947 [2024-07-15 14:20:10.925227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:24.947 [2024-07-15 14:20:10.925861] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:24.947 [2024-07-15 14:20:10.926102] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:27:24.947 [2024-07-15 14:20:10.926316] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:24.947 [2024-07-15 14:20:10.928230] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:24.947 [2024-07-15 14:20:10.928458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:24.947 BaseBdev3 00:27:24.947 14:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:27:24.947 14:20:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:27:25.204 BaseBdev4_malloc 00:27:25.462 14:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:27:25.462 true 00:27:25.720 14:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:27:25.977 [2024-07-15 14:20:11.751306] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on EE_BaseBdev4_malloc 00:27:25.977 [2024-07-15 14:20:11.751832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:25.977 [2024-07-15 14:20:11.752064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:25.977 [2024-07-15 14:20:11.752285] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:25.977 [2024-07-15 14:20:11.754191] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:25.977 [2024-07-15 14:20:11.754424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:27:25.977 BaseBdev4 00:27:25.977 14:20:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:27:26.235 [2024-07-15 14:20:11.995391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:26.235 [2024-07-15 14:20:11.997127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:26.235 [2024-07-15 14:20:11.997313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:26.235 [2024-07-15 14:20:11.997413] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:27:26.235 [2024-07-15 14:20:11.997695] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:27:26.235 [2024-07-15 14:20:11.997772] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:26.235 [2024-07-15 14:20:11.997986] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:26.235 [2024-07-15 14:20:11.998354] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:27:26.235 [2024-07-15 14:20:11.998475] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:27:26.235 [2024-07-15 14:20:11.998696] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.235 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.494 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.494 "name": "raid_bdev1", 00:27:26.494 "uuid": "cb8eb020-7d3f-46f6-8468-148660b407be", 00:27:26.494 "strip_size_kb": 0, 00:27:26.494 "state": "online", 00:27:26.494 "raid_level": "raid1", 00:27:26.494 "superblock": true, 00:27:26.494 "num_base_bdevs": 4, 00:27:26.494 "num_base_bdevs_discovered": 4, 00:27:26.494 "num_base_bdevs_operational": 4, 00:27:26.494 "base_bdevs_list": [ 00:27:26.494 { 00:27:26.494 "name": "BaseBdev1", 00:27:26.494 "uuid": "bd4ab265-0ca3-5226-8b8f-d3728250ea6f", 00:27:26.494 "is_configured": true, 00:27:26.494 "data_offset": 2048, 00:27:26.494 "data_size": 63488 00:27:26.494 }, 00:27:26.494 { 00:27:26.494 "name": "BaseBdev2", 00:27:26.494 "uuid": "5fe2a987-ea99-5b0c-bd12-d6de01c84fca", 00:27:26.494 "is_configured": true, 00:27:26.494 "data_offset": 2048, 00:27:26.494 "data_size": 63488 00:27:26.494 }, 00:27:26.494 { 00:27:26.494 "name": "BaseBdev3", 00:27:26.494 "uuid": "1ce6d325-7009-5ca5-932a-fae205b0ef5c", 00:27:26.494 "is_configured": true, 00:27:26.494 "data_offset": 2048, 00:27:26.494 "data_size": 63488 00:27:26.494 }, 00:27:26.494 { 00:27:26.494 "name": "BaseBdev4", 00:27:26.494 "uuid": "01045c7c-c767-54c5-9b0e-02e1915fa6a1", 00:27:26.494 "is_configured": true, 00:27:26.494 "data_offset": 2048, 00:27:26.494 "data_size": 63488 00:27:26.494 } 00:27:26.494 ] 00:27:26.494 }' 00:27:26.494 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.494 14:20:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.060 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:27:27.060 14:20:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:27.060 [2024-07-15 14:20:13.000698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:27:27.993 14:20:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:28.250 [2024-07-15 14:20:14.179285] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:28.250 [2024-07-15 14:20:14.180001] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:28.250 [2024-07-15 14:20:14.180332] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:28.250 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.507 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:28.507 "name": "raid_bdev1", 00:27:28.507 "uuid": "cb8eb020-7d3f-46f6-8468-148660b407be", 00:27:28.507 "strip_size_kb": 0, 00:27:28.507 "state": "online", 00:27:28.507 "raid_level": "raid1", 00:27:28.507 "superblock": true, 00:27:28.507 "num_base_bdevs": 4, 00:27:28.507 "num_base_bdevs_discovered": 3, 00:27:28.507 "num_base_bdevs_operational": 3, 00:27:28.507 "base_bdevs_list": [ 00:27:28.507 { 00:27:28.507 "name": null, 00:27:28.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.507 "is_configured": false, 00:27:28.507 "data_offset": 2048, 00:27:28.507 "data_size": 63488 00:27:28.507 }, 00:27:28.507 { 00:27:28.507 "name": "BaseBdev2", 00:27:28.507 "uuid": "5fe2a987-ea99-5b0c-bd12-d6de01c84fca", 00:27:28.507 "is_configured": true, 00:27:28.507 "data_offset": 2048, 00:27:28.507 "data_size": 63488 00:27:28.507 }, 00:27:28.507 { 00:27:28.507 "name": "BaseBdev3", 00:27:28.507 "uuid": "1ce6d325-7009-5ca5-932a-fae205b0ef5c", 00:27:28.507 "is_configured": true, 00:27:28.507 "data_offset": 2048, 00:27:28.507 "data_size": 63488 00:27:28.507 }, 00:27:28.507 { 00:27:28.507 "name": "BaseBdev4", 00:27:28.507 "uuid": "01045c7c-c767-54c5-9b0e-02e1915fa6a1", 00:27:28.507 "is_configured": true, 00:27:28.507 "data_offset": 2048, 00:27:28.507 "data_size": 63488 00:27:28.507 } 00:27:28.507 ] 00:27:28.507 }' 00:27:28.507 14:20:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:28.507 14:20:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:29.440 [2024-07-15 14:20:15.370541] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:29.440 [2024-07-15 14:20:15.370841] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:29.440 [2024-07-15 14:20:15.372234] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:29.440 [2024-07-15 14:20:15.372382] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:29.440 [2024-07-15 14:20:15.372541] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:29.440 [2024-07-15 
14:20:15.372648] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:27:29.440 0 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 210192 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 210192 ']' 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 210192 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 210192 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 210192' 00:27:29.440 killing process with pid 210192 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 210192 00:27:29.440 [2024-07-15 14:20:15.427571] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:29.440 14:20:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 210192 00:27:30.005 [2024-07-15 14:20:15.710828] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.F4ISx5DYiI 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:30.940 00:27:30.940 real 0m9.391s 00:27:30.940 user 0m14.568s 00:27:30.940 sys 0m1.033s 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:30.940 14:20:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.940 ************************************ 00:27:30.940 END TEST raid_write_error_test 00:27:30.940 ************************************ 00:27:31.198 14:20:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:31.198 14:20:16 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:27:31.198 14:20:16 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:27:31.198 14:20:16 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:27:31.198 14:20:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:31.198 14:20:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.198 14:20:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:31.198 
************************************ 00:27:31.198 START TEST raid_rebuild_test 00:27:31.198 ************************************ 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false false true 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:31.198 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=210412 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 210412 /var/tmp/spdk-raid.sock 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@829 -- # '[' -z 210412 ']' 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:27:31.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:31.199 14:20:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.199 [2024-07-15 14:20:17.023704] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:27:31.199 [2024-07-15 14:20:17.024213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210412 ] 00:27:31.199 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:31.199 Zero copy mechanism will not be used. 00:27:31.199 [2024-07-15 14:20:17.197702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.456 [2024-07-15 14:20:17.413169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.713 [2024-07-15 14:20:17.609397] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:32.280 14:20:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:32.280 14:20:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:27:32.280 14:20:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:32.280 14:20:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:32.280 BaseBdev1_malloc 00:27:32.280 14:20:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:32.539 [2024-07-15 14:20:18.466197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:32.539 [2024-07-15 14:20:18.466936] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:32.539 [2024-07-15 14:20:18.467211] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:27:32.539 [2024-07-15 14:20:18.467418] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:32.539 [2024-07-15 14:20:18.469372] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:32.539 [2024-07-15 14:20:18.469646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:32.539 BaseBdev1 00:27:32.539 14:20:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:32.539 14:20:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:32.796 BaseBdev2_malloc 00:27:32.796 14:20:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:33.361 [2024-07-15 14:20:19.064050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:33.362 [2024-07-15 14:20:19.064508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.362 [2024-07-15 14:20:19.064754] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:33.362 [2024-07-15 14:20:19.064964] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.362 [2024-07-15 14:20:19.066905] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.362 [2024-07-15 14:20:19.067136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:33.362 BaseBdev2 00:27:33.362 14:20:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:33.362 spare_malloc 00:27:33.362 14:20:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:33.620 spare_delay 00:27:33.879 14:20:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:33.879 [2024-07-15 14:20:19.858668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:33.879 [2024-07-15 14:20:19.859332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:33.879 [2024-07-15 14:20:19.859572] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:33.880 [2024-07-15 14:20:19.859820] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:33.880 [2024-07-15 14:20:19.861721] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:33.880 [2024-07-15 14:20:19.861964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:33.880 spare 00:27:33.880 14:20:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:34.138 [2024-07-15 14:20:20.086760] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:34.138 [2024-07-15 14:20:20.088440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:34.138 [2024-07-15 14:20:20.088634] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:27:34.138 [2024-07-15 14:20:20.088686] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:34.138 [2024-07-15 14:20:20.088927] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:34.138 [2024-07-15 14:20:20.089336] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:27:34.138 [2024-07-15 14:20:20.089466] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:27:34.138 [2024-07-15 14:20:20.089699] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.138 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.705 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.705 "name": "raid_bdev1", 00:27:34.705 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:34.705 "strip_size_kb": 0, 00:27:34.705 "state": "online", 00:27:34.705 "raid_level": "raid1", 00:27:34.705 "superblock": false, 00:27:34.705 "num_base_bdevs": 2, 00:27:34.705 "num_base_bdevs_discovered": 2, 00:27:34.705 "num_base_bdevs_operational": 2, 00:27:34.705 "base_bdevs_list": [ 00:27:34.705 { 00:27:34.705 "name": "BaseBdev1", 00:27:34.705 "uuid": "fd7477de-ee9a-5e4b-b281-01fbe966198a", 00:27:34.705 "is_configured": true, 00:27:34.705 "data_offset": 0, 00:27:34.705 "data_size": 65536 00:27:34.705 }, 00:27:34.705 { 00:27:34.705 "name": "BaseBdev2", 00:27:34.705 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:34.705 "is_configured": true, 00:27:34.705 "data_offset": 0, 00:27:34.705 "data_size": 65536 00:27:34.705 } 00:27:34.705 ] 00:27:34.705 }' 00:27:34.705 14:20:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.705 14:20:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:35.272 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:35.272 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:35.272 [2024-07-15 14:20:21.251075] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:35.272 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:27:35.272 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.272 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:35.530 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:35.531 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:35.531 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:35.531 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:35.531 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:35.531 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:35.531 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:35.789 [2024-07-15 14:20:21.783060] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:36.048 /dev/nbd0 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.048 1+0 records in 00:27:36.048 1+0 records out 00:27:36.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426211 s, 9.6 MB/s 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:36.048 14:20:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:27:40.233 65536+0 records in 00:27:40.233 65536+0 records out 
00:27:40.233 33554432 bytes (34 MB, 32 MiB) copied, 3.77191 s, 8.9 MB/s 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:40.233 [2024-07-15 14:20:25.914806] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:40.233 14:20:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:40.233 [2024-07-15 14:20:26.146614] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.233 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.501 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.501 
"name": "raid_bdev1", 00:27:40.501 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:40.501 "strip_size_kb": 0, 00:27:40.501 "state": "online", 00:27:40.501 "raid_level": "raid1", 00:27:40.501 "superblock": false, 00:27:40.501 "num_base_bdevs": 2, 00:27:40.501 "num_base_bdevs_discovered": 1, 00:27:40.501 "num_base_bdevs_operational": 1, 00:27:40.501 "base_bdevs_list": [ 00:27:40.501 { 00:27:40.501 "name": null, 00:27:40.501 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.501 "is_configured": false, 00:27:40.501 "data_offset": 0, 00:27:40.501 "data_size": 65536 00:27:40.501 }, 00:27:40.501 { 00:27:40.501 "name": "BaseBdev2", 00:27:40.501 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:40.501 "is_configured": true, 00:27:40.501 "data_offset": 0, 00:27:40.501 "data_size": 65536 00:27:40.501 } 00:27:40.501 ] 00:27:40.501 }' 00:27:40.501 14:20:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.501 14:20:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:41.435 14:20:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:41.435 [2024-07-15 14:20:27.346831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:41.435 [2024-07-15 14:20:27.361949] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09b00 00:27:41.435 [2024-07-15 14:20:27.363553] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:41.435 14:20:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:42.810 "name": "raid_bdev1", 00:27:42.810 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:42.810 "strip_size_kb": 0, 00:27:42.810 "state": "online", 00:27:42.810 "raid_level": "raid1", 00:27:42.810 "superblock": false, 00:27:42.810 "num_base_bdevs": 2, 00:27:42.810 "num_base_bdevs_discovered": 2, 00:27:42.810 "num_base_bdevs_operational": 2, 00:27:42.810 "process": { 00:27:42.810 "type": "rebuild", 00:27:42.810 "target": "spare", 00:27:42.810 "progress": { 00:27:42.810 "blocks": 24576, 00:27:42.810 "percent": 37 00:27:42.810 } 00:27:42.810 }, 00:27:42.810 "base_bdevs_list": [ 00:27:42.810 { 00:27:42.810 "name": "spare", 00:27:42.810 "uuid": "ab10e5e6-c740-54d9-8fdd-6fa3babb240a", 00:27:42.810 "is_configured": true, 00:27:42.810 "data_offset": 0, 00:27:42.810 "data_size": 65536 00:27:42.810 }, 00:27:42.810 { 00:27:42.810 "name": "BaseBdev2", 00:27:42.810 "uuid": 
"6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:42.810 "is_configured": true, 00:27:42.810 "data_offset": 0, 00:27:42.810 "data_size": 65536 00:27:42.810 } 00:27:42.810 ] 00:27:42.810 }' 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:42.810 14:20:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:43.068 [2024-07-15 14:20:28.995287] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:43.326 [2024-07-15 14:20:29.073957] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:43.326 [2024-07-15 14:20:29.074231] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:43.326 [2024-07-15 14:20:29.074396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:43.326 [2024-07-15 14:20:29.074509] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:43.326 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:43.584 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:43.584 "name": "raid_bdev1", 00:27:43.584 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:43.584 "strip_size_kb": 0, 00:27:43.584 "state": "online", 00:27:43.584 "raid_level": "raid1", 00:27:43.584 "superblock": false, 00:27:43.584 "num_base_bdevs": 2, 00:27:43.584 "num_base_bdevs_discovered": 1, 00:27:43.584 "num_base_bdevs_operational": 1, 00:27:43.584 "base_bdevs_list": [ 00:27:43.584 { 00:27:43.584 "name": null, 00:27:43.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:43.584 "is_configured": false, 00:27:43.584 "data_offset": 0, 00:27:43.584 "data_size": 65536 00:27:43.584 }, 00:27:43.584 { 00:27:43.584 
"name": "BaseBdev2", 00:27:43.584 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:43.584 "is_configured": true, 00:27:43.584 "data_offset": 0, 00:27:43.584 "data_size": 65536 00:27:43.584 } 00:27:43.584 ] 00:27:43.584 }' 00:27:43.584 14:20:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:43.584 14:20:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:44.150 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:44.150 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.150 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:44.150 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:44.150 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.150 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.150 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.408 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.408 "name": "raid_bdev1", 00:27:44.408 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:44.408 "strip_size_kb": 0, 00:27:44.408 "state": "online", 00:27:44.408 "raid_level": "raid1", 00:27:44.408 "superblock": false, 00:27:44.408 "num_base_bdevs": 2, 00:27:44.408 "num_base_bdevs_discovered": 1, 00:27:44.408 "num_base_bdevs_operational": 1, 00:27:44.408 "base_bdevs_list": [ 00:27:44.408 { 00:27:44.408 "name": null, 00:27:44.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:44.408 "is_configured": false, 00:27:44.408 "data_offset": 0, 00:27:44.408 "data_size": 65536 00:27:44.408 }, 00:27:44.408 { 00:27:44.408 "name": "BaseBdev2", 00:27:44.408 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:44.408 "is_configured": true, 00:27:44.408 "data_offset": 0, 00:27:44.408 "data_size": 65536 00:27:44.408 } 00:27:44.408 ] 00:27:44.408 }' 00:27:44.408 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.408 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:44.408 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.408 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:44.408 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:44.667 [2024-07-15 14:20:30.638985] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:44.667 [2024-07-15 14:20:30.653217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:27:44.667 [2024-07-15 14:20:30.654930] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:44.926 14:20:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:45.859 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:45.859 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_name=raid_bdev1 00:27:45.860 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:45.860 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:45.860 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.860 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.860 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.118 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.118 "name": "raid_bdev1", 00:27:46.118 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:46.118 "strip_size_kb": 0, 00:27:46.118 "state": "online", 00:27:46.118 "raid_level": "raid1", 00:27:46.118 "superblock": false, 00:27:46.118 "num_base_bdevs": 2, 00:27:46.118 "num_base_bdevs_discovered": 2, 00:27:46.118 "num_base_bdevs_operational": 2, 00:27:46.118 "process": { 00:27:46.118 "type": "rebuild", 00:27:46.118 "target": "spare", 00:27:46.118 "progress": { 00:27:46.118 "blocks": 24576, 00:27:46.118 "percent": 37 00:27:46.118 } 00:27:46.118 }, 00:27:46.118 "base_bdevs_list": [ 00:27:46.118 { 00:27:46.118 "name": "spare", 00:27:46.118 "uuid": "ab10e5e6-c740-54d9-8fdd-6fa3babb240a", 00:27:46.118 "is_configured": true, 00:27:46.118 "data_offset": 0, 00:27:46.118 "data_size": 65536 00:27:46.118 }, 00:27:46.118 { 00:27:46.118 "name": "BaseBdev2", 00:27:46.118 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:46.118 "is_configured": true, 00:27:46.118 "data_offset": 0, 00:27:46.118 "data_size": 65536 00:27:46.118 } 00:27:46.118 ] 00:27:46.118 }' 00:27:46.118 14:20:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=889 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:46.118 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.376 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.376 "name": "raid_bdev1", 00:27:46.376 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:46.376 "strip_size_kb": 0, 00:27:46.376 "state": "online", 00:27:46.376 "raid_level": "raid1", 00:27:46.376 "superblock": false, 00:27:46.376 "num_base_bdevs": 2, 00:27:46.376 "num_base_bdevs_discovered": 2, 00:27:46.376 "num_base_bdevs_operational": 2, 00:27:46.376 "process": { 00:27:46.376 "type": "rebuild", 00:27:46.376 "target": "spare", 00:27:46.376 "progress": { 00:27:46.376 "blocks": 32768, 00:27:46.376 "percent": 50 00:27:46.376 } 00:27:46.376 }, 00:27:46.376 "base_bdevs_list": [ 00:27:46.376 { 00:27:46.376 "name": "spare", 00:27:46.376 "uuid": "ab10e5e6-c740-54d9-8fdd-6fa3babb240a", 00:27:46.376 "is_configured": true, 00:27:46.376 "data_offset": 0, 00:27:46.376 "data_size": 65536 00:27:46.376 }, 00:27:46.376 { 00:27:46.376 "name": "BaseBdev2", 00:27:46.376 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:46.376 "is_configured": true, 00:27:46.376 "data_offset": 0, 00:27:46.376 "data_size": 65536 00:27:46.376 } 00:27:46.376 ] 00:27:46.376 }' 00:27:46.376 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.635 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:46.635 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.635 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:46.635 14:20:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.570 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.829 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.829 "name": "raid_bdev1", 00:27:47.829 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:47.829 "strip_size_kb": 0, 00:27:47.829 "state": "online", 00:27:47.829 "raid_level": "raid1", 00:27:47.829 "superblock": false, 00:27:47.829 "num_base_bdevs": 2, 00:27:47.829 "num_base_bdevs_discovered": 2, 00:27:47.829 "num_base_bdevs_operational": 2, 00:27:47.829 "process": { 00:27:47.829 "type": "rebuild", 00:27:47.829 "target": "spare", 00:27:47.829 "progress": { 00:27:47.829 "blocks": 61440, 00:27:47.829 "percent": 93 00:27:47.829 } 00:27:47.829 }, 00:27:47.829 "base_bdevs_list": [ 00:27:47.829 { 00:27:47.829 "name": 
"spare", 00:27:47.829 "uuid": "ab10e5e6-c740-54d9-8fdd-6fa3babb240a", 00:27:47.829 "is_configured": true, 00:27:47.829 "data_offset": 0, 00:27:47.829 "data_size": 65536 00:27:47.829 }, 00:27:47.829 { 00:27:47.829 "name": "BaseBdev2", 00:27:47.829 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:47.829 "is_configured": true, 00:27:47.829 "data_offset": 0, 00:27:47.829 "data_size": 65536 00:27:47.829 } 00:27:47.829 ] 00:27:47.829 }' 00:27:47.829 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.829 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.829 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.829 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.829 14:20:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:48.087 [2024-07-15 14:20:33.874395] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:48.088 [2024-07-15 14:20:33.874715] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:48.088 [2024-07-15 14:20:33.875280] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.021 14:20:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.280 "name": "raid_bdev1", 00:27:49.280 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:49.280 "strip_size_kb": 0, 00:27:49.280 "state": "online", 00:27:49.280 "raid_level": "raid1", 00:27:49.280 "superblock": false, 00:27:49.280 "num_base_bdevs": 2, 00:27:49.280 "num_base_bdevs_discovered": 2, 00:27:49.280 "num_base_bdevs_operational": 2, 00:27:49.280 "base_bdevs_list": [ 00:27:49.280 { 00:27:49.280 "name": "spare", 00:27:49.280 "uuid": "ab10e5e6-c740-54d9-8fdd-6fa3babb240a", 00:27:49.280 "is_configured": true, 00:27:49.280 "data_offset": 0, 00:27:49.280 "data_size": 65536 00:27:49.280 }, 00:27:49.280 { 00:27:49.280 "name": "BaseBdev2", 00:27:49.280 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:49.280 "is_configured": true, 00:27:49.280 "data_offset": 0, 00:27:49.280 "data_size": 65536 00:27:49.280 } 00:27:49.280 ] 00:27:49.280 }' 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.280 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.539 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.539 "name": "raid_bdev1", 00:27:49.539 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:49.539 "strip_size_kb": 0, 00:27:49.539 "state": "online", 00:27:49.539 "raid_level": "raid1", 00:27:49.539 "superblock": false, 00:27:49.539 "num_base_bdevs": 2, 00:27:49.539 "num_base_bdevs_discovered": 2, 00:27:49.539 "num_base_bdevs_operational": 2, 00:27:49.539 "base_bdevs_list": [ 00:27:49.539 { 00:27:49.539 "name": "spare", 00:27:49.539 "uuid": "ab10e5e6-c740-54d9-8fdd-6fa3babb240a", 00:27:49.539 "is_configured": true, 00:27:49.539 "data_offset": 0, 00:27:49.539 "data_size": 65536 00:27:49.539 }, 00:27:49.539 { 00:27:49.539 "name": "BaseBdev2", 00:27:49.539 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:49.539 "is_configured": true, 00:27:49.539 "data_offset": 0, 00:27:49.539 "data_size": 65536 00:27:49.539 } 00:27:49.539 ] 00:27:49.539 }' 00:27:49.539 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.539 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:49.539 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # 
local tmp 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.811 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.077 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:50.077 "name": "raid_bdev1", 00:27:50.077 "uuid": "4f497c1f-4050-4e92-8aea-523781894d2c", 00:27:50.077 "strip_size_kb": 0, 00:27:50.077 "state": "online", 00:27:50.077 "raid_level": "raid1", 00:27:50.077 "superblock": false, 00:27:50.077 "num_base_bdevs": 2, 00:27:50.077 "num_base_bdevs_discovered": 2, 00:27:50.077 "num_base_bdevs_operational": 2, 00:27:50.077 "base_bdevs_list": [ 00:27:50.077 { 00:27:50.077 "name": "spare", 00:27:50.077 "uuid": "ab10e5e6-c740-54d9-8fdd-6fa3babb240a", 00:27:50.077 "is_configured": true, 00:27:50.077 "data_offset": 0, 00:27:50.077 "data_size": 65536 00:27:50.077 }, 00:27:50.077 { 00:27:50.077 "name": "BaseBdev2", 00:27:50.077 "uuid": "6a88a00b-8338-582d-a3cd-95d2e815b483", 00:27:50.077 "is_configured": true, 00:27:50.077 "data_offset": 0, 00:27:50.077 "data_size": 65536 00:27:50.077 } 00:27:50.077 ] 00:27:50.077 }' 00:27:50.077 14:20:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:50.077 14:20:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:50.643 14:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:50.902 [2024-07-15 14:20:36.714915] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:50.902 [2024-07-15 14:20:36.715200] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:50.902 [2024-07-15 14:20:36.715386] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:50.902 [2024-07-15 14:20:36.715565] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:50.902 [2024-07-15 14:20:36.715699] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:27:50.902 14:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.902 14:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- 
# local nbd_list 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:51.161 14:20:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:51.420 /dev/nbd0 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:51.420 1+0 records in 00:27:51.420 1+0 records out 00:27:51.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000898728 s, 4.6 MB/s 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:51.420 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:51.678 /dev/nbd1 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:51.678 14:20:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:51.678 1+0 records in 00:27:51.678 1+0 records out 00:27:51.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594207 s, 6.9 MB/s 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:51.678 14:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:51.936 14:20:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:51.936 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:51.936 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:51.936 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:51.936 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:27:51.936 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:51.936 14:20:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:52.194 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 210412 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 210412 ']' 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 210412 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 210412 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 210412' 00:27:52.453 killing process with pid 210412 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 210412 00:27:52.453 Received shutdown signal, test time was about 60.000000 seconds 00:27:52.453 00:27:52.453 Latency(us) 00:27:52.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.453 =================================================================================================================== 00:27:52.453 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:52.453 14:20:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 210412 00:27:52.453 [2024-07-15 14:20:38.332070] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:52.711 [2024-07-15 14:20:38.589864] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:27:54.116 00:27:54.116 real 0m22.768s 00:27:54.116 user 0m32.030s 00:27:54.116 sys 0m4.195s 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.116 ************************************ 00:27:54.116 END TEST raid_rebuild_test 00:27:54.116 ************************************ 00:27:54.116 14:20:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:27:54.116 14:20:39 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:27:54.116 14:20:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:27:54.116 14:20:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.116 14:20:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:54.116 
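Both the test that just finished and the superblock variant that starts below drive their assertions through the same pattern: dump bdev_raid_get_bdevs all over the test RPC socket, pick out raid_bdev1 with jq, and compare individual fields. The following is a minimal sketch of that check, reusing the socket path, bdev name and jq filters visible in the trace; it is not the exact verify_raid_bdev_state / verify_raid_bdev_process helpers from bdev_raid.sh, which additionally validate state, raid_level, strip_size and the base-bdev counts.

# Sketch only: simplified restatement of the verification pattern in this trace.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

check_process() {
    local name="$1" want_type="$2" want_target="$3" info
    # Dump every raid bdev and keep only the one under test.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r ".[] | select(.name == \"$name\")")
    # ".process" disappears once the rebuild finishes, hence the // "none" fallback.
    [[ $(jq -r '.process.type   // "none"' <<< "$info") == "$want_type"   ]] &&
    [[ $(jq -r '.process.target // "none"' <<< "$info") == "$want_target" ]]
}

# While the spare is being rebuilt:      check_process raid_bdev1 rebuild spare
# After raid_bdev_process_finish_done:   check_process raid_bdev1 none    none

The state checks seen above follow the same shape, only comparing .state, .raid_level, .strip_size_kb and the num_base_bdevs_* counters instead of .process.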
************************************ 00:27:54.116 START TEST raid_rebuild_test_sb 00:27:54.116 ************************************ 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=210929 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 210929 /var/tmp/spdk-raid.sock 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 210929 ']' 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 
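The superblock variant that begins here follows the same driver pattern as the previous test: bdevperf is launched paused (-z) as the RPC target, the script waits for its socket, and the raid1 set is then built over RPC before any I/O runs. Below is a condensed sketch of that startup using the paths and bdev names from this run; the inline socket poll is only a stand-in for the waitforlisten helper, and the real raid_rebuild_test assembles the base-bdev list in a loop rather than spelling each RPC out.

# Sketch only: condensed from the commands visible in the surrounding trace.
spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/spdk-raid.sock
rpc() { "$spdk/scripts/rpc.py" -s "$sock" "$@"; }

# Start bdevperf idle (-z) so the bdevs can be created over RPC first.
"$spdk/build/examples/bdevperf" -r "$sock" -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!

# Stand-in for waitforlisten: poll until the RPC socket answers (assumption).
until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

# Base bdevs: malloc -> passthru, plus a delay-backed passthru used as the spare.
rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
rpc bdev_malloc_create 32 512 -b spare_malloc
rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
rpc bdev_passthru_create -b spare_delay -p spare

# raid1 with an on-disk superblock (-s); the trace shows this leaves
# data_offset 2048 and data_size 63488 on the 512-byte-block base bdevs.
rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1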
00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:54.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:54.116 14:20:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:54.116 [2024-07-15 14:20:39.856092] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:27:54.116 [2024-07-15 14:20:39.856587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid210929 ] 00:27:54.116 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:54.116 Zero copy mechanism will not be used. 00:27:54.116 [2024-07-15 14:20:40.018854] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.374 [2024-07-15 14:20:40.294982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.633 [2024-07-15 14:20:40.496309] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:55.199 14:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:55.199 14:20:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:27:55.199 14:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:55.199 14:20:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:55.458 BaseBdev1_malloc 00:27:55.458 14:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:55.716 [2024-07-15 14:20:41.490967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:55.716 [2024-07-15 14:20:41.491283] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:55.716 [2024-07-15 14:20:41.491495] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:27:55.716 [2024-07-15 14:20:41.491665] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:55.716 [2024-07-15 14:20:41.493964] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:55.716 [2024-07-15 14:20:41.494191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:55.716 BaseBdev1 00:27:55.716 14:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:55.716 14:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:55.974 BaseBdev2_malloc 00:27:55.974 14:20:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:56.277 [2024-07-15 14:20:42.020540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev2_malloc 00:27:56.277 [2024-07-15 14:20:42.020908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:56.277 [2024-07-15 14:20:42.021077] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:27:56.277 [2024-07-15 14:20:42.021209] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:56.277 [2024-07-15 14:20:42.023179] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:56.277 [2024-07-15 14:20:42.023363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:56.277 BaseBdev2 00:27:56.277 14:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:56.535 spare_malloc 00:27:56.535 14:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:56.791 spare_delay 00:27:56.791 14:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:57.048 [2024-07-15 14:20:42.863511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:57.048 [2024-07-15 14:20:42.863855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.048 [2024-07-15 14:20:42.864029] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:57.048 [2024-07-15 14:20:42.864168] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.048 [2024-07-15 14:20:42.865981] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.048 [2024-07-15 14:20:42.866159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:57.048 spare 00:27:57.048 14:20:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:57.308 [2024-07-15 14:20:43.127583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:57.308 [2024-07-15 14:20:43.129424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:57.308 [2024-07-15 14:20:43.129774] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:27:57.308 [2024-07-15 14:20:43.129910] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:57.308 [2024-07-15 14:20:43.130070] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:57.308 [2024-07-15 14:20:43.130441] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:27:57.308 [2024-07-15 14:20:43.130597] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:27:57.308 [2024-07-15 14:20:43.130834] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=raid_bdev1 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.308 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.566 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.566 "name": "raid_bdev1", 00:27:57.566 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:27:57.566 "strip_size_kb": 0, 00:27:57.566 "state": "online", 00:27:57.566 "raid_level": "raid1", 00:27:57.566 "superblock": true, 00:27:57.566 "num_base_bdevs": 2, 00:27:57.566 "num_base_bdevs_discovered": 2, 00:27:57.566 "num_base_bdevs_operational": 2, 00:27:57.566 "base_bdevs_list": [ 00:27:57.566 { 00:27:57.566 "name": "BaseBdev1", 00:27:57.566 "uuid": "60bbd5df-a6d5-53d1-a2a1-68119b72ebfc", 00:27:57.566 "is_configured": true, 00:27:57.566 "data_offset": 2048, 00:27:57.566 "data_size": 63488 00:27:57.566 }, 00:27:57.566 { 00:27:57.566 "name": "BaseBdev2", 00:27:57.566 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:27:57.566 "is_configured": true, 00:27:57.566 "data_offset": 2048, 00:27:57.566 "data_size": 63488 00:27:57.566 } 00:27:57.566 ] 00:27:57.566 }' 00:27:57.566 14:20:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.566 14:20:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:58.139 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:58.139 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:58.397 [2024-07-15 14:20:44.255941] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:58.397 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:58.397 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:58.397 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@624 -- # local write_unit_size 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:58.655 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:27:58.913 [2024-07-15 14:20:44.807889] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:58.913 /dev/nbd0 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:58.913 1+0 records in 00:27:58.913 1+0 records out 00:27:58.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494523 s, 8.3 MB/s 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:58.913 14:20:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:27:58.913 14:20:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:03.153 63488+0 records in 00:28:03.153 63488+0 records out 00:28:03.153 32505856 bytes (33 MB, 31 MiB) copied, 3.72848 s, 8.7 MB/s 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:03.153 [2024-07-15 14:20:48.869664] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:03.153 14:20:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:03.153 [2024-07-15 14:20:49.097511] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.153 14:20:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.153 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.411 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.411 "name": "raid_bdev1", 00:28:03.411 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:03.411 "strip_size_kb": 0, 00:28:03.411 "state": "online", 00:28:03.411 "raid_level": "raid1", 00:28:03.411 "superblock": true, 00:28:03.411 "num_base_bdevs": 2, 00:28:03.411 "num_base_bdevs_discovered": 1, 00:28:03.411 "num_base_bdevs_operational": 1, 00:28:03.411 "base_bdevs_list": [ 00:28:03.411 { 00:28:03.411 "name": null, 00:28:03.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.411 "is_configured": false, 00:28:03.411 "data_offset": 2048, 00:28:03.411 "data_size": 63488 00:28:03.411 }, 00:28:03.411 { 00:28:03.411 "name": "BaseBdev2", 00:28:03.411 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:03.411 "is_configured": true, 00:28:03.411 "data_offset": 2048, 00:28:03.411 "data_size": 63488 00:28:03.411 } 00:28:03.411 ] 00:28:03.411 }' 00:28:03.411 14:20:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.411 14:20:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:04.342 14:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:04.342 [2024-07-15 14:20:50.269734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:04.342 [2024-07-15 14:20:50.284608] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3290 00:28:04.342 [2024-07-15 14:20:50.286091] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:04.342 14:20:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:05.716 "name": "raid_bdev1", 00:28:05.716 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:05.716 "strip_size_kb": 0, 00:28:05.716 "state": "online", 00:28:05.716 "raid_level": "raid1", 00:28:05.716 "superblock": true, 00:28:05.716 "num_base_bdevs": 2, 00:28:05.716 "num_base_bdevs_discovered": 2, 00:28:05.716 "num_base_bdevs_operational": 2, 00:28:05.716 
"process": { 00:28:05.716 "type": "rebuild", 00:28:05.716 "target": "spare", 00:28:05.716 "progress": { 00:28:05.716 "blocks": 24576, 00:28:05.716 "percent": 38 00:28:05.716 } 00:28:05.716 }, 00:28:05.716 "base_bdevs_list": [ 00:28:05.716 { 00:28:05.716 "name": "spare", 00:28:05.716 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:05.716 "is_configured": true, 00:28:05.716 "data_offset": 2048, 00:28:05.716 "data_size": 63488 00:28:05.716 }, 00:28:05.716 { 00:28:05.716 "name": "BaseBdev2", 00:28:05.716 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:05.716 "is_configured": true, 00:28:05.716 "data_offset": 2048, 00:28:05.716 "data_size": 63488 00:28:05.716 } 00:28:05.716 ] 00:28:05.716 }' 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:05.716 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:05.717 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:05.717 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:05.717 14:20:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:05.976 [2024-07-15 14:20:51.948395] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:06.235 [2024-07-15 14:20:51.996199] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:06.235 [2024-07-15 14:20:51.996331] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:06.235 [2024-07-15 14:20:51.996348] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:06.235 [2024-07-15 14:20:51.996357] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.235 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.494 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.494 "name": "raid_bdev1", 00:28:06.494 "uuid": 
"cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:06.494 "strip_size_kb": 0, 00:28:06.494 "state": "online", 00:28:06.494 "raid_level": "raid1", 00:28:06.494 "superblock": true, 00:28:06.494 "num_base_bdevs": 2, 00:28:06.494 "num_base_bdevs_discovered": 1, 00:28:06.494 "num_base_bdevs_operational": 1, 00:28:06.494 "base_bdevs_list": [ 00:28:06.494 { 00:28:06.494 "name": null, 00:28:06.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.494 "is_configured": false, 00:28:06.494 "data_offset": 2048, 00:28:06.494 "data_size": 63488 00:28:06.494 }, 00:28:06.494 { 00:28:06.494 "name": "BaseBdev2", 00:28:06.494 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:06.494 "is_configured": true, 00:28:06.494 "data_offset": 2048, 00:28:06.494 "data_size": 63488 00:28:06.494 } 00:28:06.494 ] 00:28:06.494 }' 00:28:06.494 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.494 14:20:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:07.060 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:07.060 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:07.060 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:07.060 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:07.060 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:07.060 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:07.060 14:20:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.318 14:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:07.318 "name": "raid_bdev1", 00:28:07.318 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:07.318 "strip_size_kb": 0, 00:28:07.318 "state": "online", 00:28:07.318 "raid_level": "raid1", 00:28:07.318 "superblock": true, 00:28:07.318 "num_base_bdevs": 2, 00:28:07.318 "num_base_bdevs_discovered": 1, 00:28:07.318 "num_base_bdevs_operational": 1, 00:28:07.318 "base_bdevs_list": [ 00:28:07.318 { 00:28:07.318 "name": null, 00:28:07.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.318 "is_configured": false, 00:28:07.318 "data_offset": 2048, 00:28:07.318 "data_size": 63488 00:28:07.318 }, 00:28:07.318 { 00:28:07.318 "name": "BaseBdev2", 00:28:07.318 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:07.318 "is_configured": true, 00:28:07.318 "data_offset": 2048, 00:28:07.318 "data_size": 63488 00:28:07.318 } 00:28:07.318 ] 00:28:07.318 }' 00:28:07.318 14:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:07.318 14:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:07.318 14:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:07.318 14:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:07.318 14:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:07.575 [2024-07-15 14:20:53.529517] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:07.575 [2024-07-15 14:20:53.543625] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:28:07.575 [2024-07-15 14:20:53.545170] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:07.575 14:20:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:08.946 "name": "raid_bdev1", 00:28:08.946 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:08.946 "strip_size_kb": 0, 00:28:08.946 "state": "online", 00:28:08.946 "raid_level": "raid1", 00:28:08.946 "superblock": true, 00:28:08.946 "num_base_bdevs": 2, 00:28:08.946 "num_base_bdevs_discovered": 2, 00:28:08.946 "num_base_bdevs_operational": 2, 00:28:08.946 "process": { 00:28:08.946 "type": "rebuild", 00:28:08.946 "target": "spare", 00:28:08.946 "progress": { 00:28:08.946 "blocks": 24576, 00:28:08.946 "percent": 38 00:28:08.946 } 00:28:08.946 }, 00:28:08.946 "base_bdevs_list": [ 00:28:08.946 { 00:28:08.946 "name": "spare", 00:28:08.946 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:08.946 "is_configured": true, 00:28:08.946 "data_offset": 2048, 00:28:08.946 "data_size": 63488 00:28:08.946 }, 00:28:08.946 { 00:28:08.946 "name": "BaseBdev2", 00:28:08.946 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:08.946 "is_configured": true, 00:28:08.946 "data_offset": 2048, 00:28:08.946 "data_size": 63488 00:28:08.946 } 00:28:08.946 ] 00:28:08.946 }' 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:08.946 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 
2 -gt 2 ']' 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=911 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:08.946 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.947 14:20:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.204 14:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:09.204 "name": "raid_bdev1", 00:28:09.204 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:09.204 "strip_size_kb": 0, 00:28:09.204 "state": "online", 00:28:09.204 "raid_level": "raid1", 00:28:09.204 "superblock": true, 00:28:09.204 "num_base_bdevs": 2, 00:28:09.204 "num_base_bdevs_discovered": 2, 00:28:09.204 "num_base_bdevs_operational": 2, 00:28:09.204 "process": { 00:28:09.204 "type": "rebuild", 00:28:09.204 "target": "spare", 00:28:09.204 "progress": { 00:28:09.204 "blocks": 30720, 00:28:09.204 "percent": 48 00:28:09.204 } 00:28:09.204 }, 00:28:09.204 "base_bdevs_list": [ 00:28:09.204 { 00:28:09.204 "name": "spare", 00:28:09.204 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:09.204 "is_configured": true, 00:28:09.204 "data_offset": 2048, 00:28:09.204 "data_size": 63488 00:28:09.204 }, 00:28:09.204 { 00:28:09.204 "name": "BaseBdev2", 00:28:09.204 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:09.204 "is_configured": true, 00:28:09.204 "data_offset": 2048, 00:28:09.204 "data_size": 63488 00:28:09.204 } 00:28:09.204 ] 00:28:09.204 }' 00:28:09.204 14:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:09.524 14:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:09.524 14:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:09.524 14:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:09.524 14:20:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:10.457 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:10.716 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:10.716 "name": "raid_bdev1", 00:28:10.716 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:10.716 "strip_size_kb": 0, 00:28:10.716 "state": "online", 00:28:10.716 "raid_level": "raid1", 00:28:10.716 "superblock": true, 00:28:10.716 "num_base_bdevs": 2, 00:28:10.716 "num_base_bdevs_discovered": 2, 00:28:10.716 "num_base_bdevs_operational": 2, 00:28:10.716 "process": { 00:28:10.716 "type": "rebuild", 00:28:10.716 "target": "spare", 00:28:10.716 "progress": { 00:28:10.716 "blocks": 59392, 00:28:10.716 "percent": 93 00:28:10.716 } 00:28:10.716 }, 00:28:10.716 "base_bdevs_list": [ 00:28:10.716 { 00:28:10.716 "name": "spare", 00:28:10.716 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:10.716 "is_configured": true, 00:28:10.716 "data_offset": 2048, 00:28:10.716 "data_size": 63488 00:28:10.716 }, 00:28:10.716 { 00:28:10.716 "name": "BaseBdev2", 00:28:10.716 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:10.716 "is_configured": true, 00:28:10.716 "data_offset": 2048, 00:28:10.716 "data_size": 63488 00:28:10.716 } 00:28:10.716 ] 00:28:10.716 }' 00:28:10.716 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:10.716 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:10.716 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:10.716 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:10.716 14:20:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:10.716 [2024-07-15 14:20:56.663428] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:10.716 [2024-07-15 14:20:56.663505] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:10.716 [2024-07-15 14:20:56.663665] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:12.091 "name": "raid_bdev1", 00:28:12.091 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:12.091 
"strip_size_kb": 0, 00:28:12.091 "state": "online", 00:28:12.091 "raid_level": "raid1", 00:28:12.091 "superblock": true, 00:28:12.091 "num_base_bdevs": 2, 00:28:12.091 "num_base_bdevs_discovered": 2, 00:28:12.091 "num_base_bdevs_operational": 2, 00:28:12.091 "base_bdevs_list": [ 00:28:12.091 { 00:28:12.091 "name": "spare", 00:28:12.091 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:12.091 "is_configured": true, 00:28:12.091 "data_offset": 2048, 00:28:12.091 "data_size": 63488 00:28:12.091 }, 00:28:12.091 { 00:28:12.091 "name": "BaseBdev2", 00:28:12.091 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:12.091 "is_configured": true, 00:28:12.091 "data_offset": 2048, 00:28:12.091 "data_size": 63488 00:28:12.091 } 00:28:12.091 ] 00:28:12.091 }' 00:28:12.091 14:20:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.091 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.349 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:12.349 "name": "raid_bdev1", 00:28:12.349 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:12.349 "strip_size_kb": 0, 00:28:12.349 "state": "online", 00:28:12.349 "raid_level": "raid1", 00:28:12.349 "superblock": true, 00:28:12.349 "num_base_bdevs": 2, 00:28:12.349 "num_base_bdevs_discovered": 2, 00:28:12.349 "num_base_bdevs_operational": 2, 00:28:12.349 "base_bdevs_list": [ 00:28:12.349 { 00:28:12.349 "name": "spare", 00:28:12.349 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:12.349 "is_configured": true, 00:28:12.349 "data_offset": 2048, 00:28:12.349 "data_size": 63488 00:28:12.349 }, 00:28:12.349 { 00:28:12.349 "name": "BaseBdev2", 00:28:12.349 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:12.349 "is_configured": true, 00:28:12.349 "data_offset": 2048, 00:28:12.349 "data_size": 63488 00:28:12.349 } 00:28:12.349 ] 00:28:12.349 }' 00:28:12.349 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:12.607 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:12.607 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:12.607 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ none == \n\o\n\e ]] 00:28:12.607 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:12.608 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:12.865 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:12.865 "name": "raid_bdev1", 00:28:12.865 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:12.865 "strip_size_kb": 0, 00:28:12.865 "state": "online", 00:28:12.865 "raid_level": "raid1", 00:28:12.865 "superblock": true, 00:28:12.865 "num_base_bdevs": 2, 00:28:12.865 "num_base_bdevs_discovered": 2, 00:28:12.865 "num_base_bdevs_operational": 2, 00:28:12.865 "base_bdevs_list": [ 00:28:12.865 { 00:28:12.865 "name": "spare", 00:28:12.865 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:12.865 "is_configured": true, 00:28:12.865 "data_offset": 2048, 00:28:12.865 "data_size": 63488 00:28:12.865 }, 00:28:12.865 { 00:28:12.865 "name": "BaseBdev2", 00:28:12.865 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:12.865 "is_configured": true, 00:28:12.865 "data_offset": 2048, 00:28:12.865 "data_size": 63488 00:28:12.865 } 00:28:12.865 ] 00:28:12.865 }' 00:28:12.865 14:20:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:12.865 14:20:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.798 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:13.798 [2024-07-15 14:20:59.682052] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:13.798 [2024-07-15 14:20:59.682099] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:13.798 [2024-07-15 14:20:59.682172] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:13.798 [2024-07-15 14:20:59.682221] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:13.798 [2024-07-15 14:20:59.682232] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:28:13.798 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:13.798 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:14.056 14:20:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:14.314 /dev/nbd0 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:14.314 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:14.572 1+0 records in 00:28:14.572 1+0 records out 00:28:14.572 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040543 s, 10.1 MB/s 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 
4096 '!=' 0 ']' 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:14.572 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:14.831 /dev/nbd1 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:14.831 1+0 records in 00:28:14.831 1+0 records out 00:28:14.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00056565 s, 7.2 MB/s 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:14.831 14:21:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:15.397 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:15.655 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:15.655 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:15.655 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:15.655 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:15.655 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:15.655 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:15.656 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:15.656 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:15.656 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:15.656 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:15.914 14:21:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:16.173 [2024-07-15 14:21:01.987817] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:16.173 [2024-07-15 14:21:01.988346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:16.173 [2024-07-15 14:21:01.988505] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:16.173 [2024-07-15 14:21:01.988594] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:16.173 [2024-07-15 14:21:01.990524] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:16.173 [2024-07-15 14:21:01.990649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:16.173 [2024-07-15 14:21:01.990832] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:16.173 [2024-07-15 14:21:01.990882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:16.173 [2024-07-15 14:21:01.991022] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:28:16.173 spare 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.173 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:16.173 [2024-07-15 14:21:02.091099] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:28:16.173 [2024-07-15 14:21:02.091139] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:16.173 [2024-07-15 14:21:02.091298] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:28:16.173 [2024-07-15 14:21:02.091578] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:28:16.173 [2024-07-15 14:21:02.091602] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:28:16.173 [2024-07-15 14:21:02.091756] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:16.432 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:16.432 "name": "raid_bdev1", 00:28:16.432 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:16.432 "strip_size_kb": 0, 00:28:16.432 "state": "online", 00:28:16.432 "raid_level": "raid1", 00:28:16.432 "superblock": true, 00:28:16.432 "num_base_bdevs": 2, 00:28:16.432 "num_base_bdevs_discovered": 2, 00:28:16.432 "num_base_bdevs_operational": 2, 00:28:16.432 "base_bdevs_list": [ 00:28:16.432 { 00:28:16.432 "name": "spare", 00:28:16.432 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:16.432 "is_configured": true, 00:28:16.432 "data_offset": 2048, 00:28:16.432 "data_size": 63488 00:28:16.432 }, 00:28:16.432 { 00:28:16.432 "name": "BaseBdev2", 00:28:16.432 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:16.432 "is_configured": true, 00:28:16.432 "data_offset": 2048, 00:28:16.432 "data_size": 63488 00:28:16.432 } 00:28:16.432 ] 00:28:16.432 }' 00:28:16.432 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:16.432 14:21:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.997 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:16.997 14:21:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:16.997 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:16.997 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:16.997 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:16.997 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:16.997 14:21:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:17.255 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:17.255 "name": "raid_bdev1", 00:28:17.255 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:17.255 "strip_size_kb": 0, 00:28:17.255 "state": "online", 00:28:17.255 "raid_level": "raid1", 00:28:17.255 "superblock": true, 00:28:17.255 "num_base_bdevs": 2, 00:28:17.255 "num_base_bdevs_discovered": 2, 00:28:17.255 "num_base_bdevs_operational": 2, 00:28:17.255 "base_bdevs_list": [ 00:28:17.255 { 00:28:17.255 "name": "spare", 00:28:17.255 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:17.255 "is_configured": true, 00:28:17.255 "data_offset": 2048, 00:28:17.255 "data_size": 63488 00:28:17.255 }, 00:28:17.255 { 00:28:17.255 "name": "BaseBdev2", 00:28:17.255 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:17.255 "is_configured": true, 00:28:17.255 "data_offset": 2048, 00:28:17.255 "data_size": 63488 00:28:17.255 } 00:28:17.255 ] 00:28:17.255 }' 00:28:17.255 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:17.255 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:17.255 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:17.512 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:17.513 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:17.513 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:28:17.770 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:28:17.770 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:18.028 [2024-07-15 14:21:03.834355] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:18.028 14:21:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:18.028 14:21:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:18.286 14:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:18.286 "name": "raid_bdev1", 00:28:18.286 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:18.286 "strip_size_kb": 0, 00:28:18.286 "state": "online", 00:28:18.286 "raid_level": "raid1", 00:28:18.286 "superblock": true, 00:28:18.286 "num_base_bdevs": 2, 00:28:18.286 "num_base_bdevs_discovered": 1, 00:28:18.286 "num_base_bdevs_operational": 1, 00:28:18.286 "base_bdevs_list": [ 00:28:18.286 { 00:28:18.286 "name": null, 00:28:18.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:18.286 "is_configured": false, 00:28:18.286 "data_offset": 2048, 00:28:18.286 "data_size": 63488 00:28:18.286 }, 00:28:18.286 { 00:28:18.286 "name": "BaseBdev2", 00:28:18.286 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:18.286 "is_configured": true, 00:28:18.286 "data_offset": 2048, 00:28:18.286 "data_size": 63488 00:28:18.286 } 00:28:18.286 ] 00:28:18.286 }' 00:28:18.286 14:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:18.286 14:21:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.852 14:21:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:19.110 [2024-07-15 14:21:05.028660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:19.110 [2024-07-15 14:21:05.028904] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:19.110 [2024-07-15 14:21:05.028921] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:28:19.110 [2024-07-15 14:21:05.029345] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:19.110 [2024-07-15 14:21:05.042818] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:28:19.110 [2024-07-15 14:21:05.056984] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:19.110 14:21:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:28:20.485 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:20.485 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:20.485 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:20.485 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:20.485 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:20.485 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:20.486 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:20.486 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:20.486 "name": "raid_bdev1", 00:28:20.486 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:20.486 "strip_size_kb": 0, 00:28:20.486 "state": "online", 00:28:20.486 "raid_level": "raid1", 00:28:20.486 "superblock": true, 00:28:20.486 "num_base_bdevs": 2, 00:28:20.486 "num_base_bdevs_discovered": 2, 00:28:20.486 "num_base_bdevs_operational": 2, 00:28:20.486 "process": { 00:28:20.486 "type": "rebuild", 00:28:20.486 "target": "spare", 00:28:20.486 "progress": { 00:28:20.486 "blocks": 24576, 00:28:20.486 "percent": 38 00:28:20.486 } 00:28:20.486 }, 00:28:20.486 "base_bdevs_list": [ 00:28:20.486 { 00:28:20.486 "name": "spare", 00:28:20.486 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:20.486 "is_configured": true, 00:28:20.486 "data_offset": 2048, 00:28:20.486 "data_size": 63488 00:28:20.486 }, 00:28:20.486 { 00:28:20.486 "name": "BaseBdev2", 00:28:20.486 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:20.486 "is_configured": true, 00:28:20.486 "data_offset": 2048, 00:28:20.486 "data_size": 63488 00:28:20.486 } 00:28:20.486 ] 00:28:20.486 }' 00:28:20.486 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:20.486 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:20.486 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:20.486 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:20.486 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:20.743 [2024-07-15 14:21:06.739248] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:21.002 [2024-07-15 14:21:06.766880] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:21.002 [2024-07-15 14:21:06.767441] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:21.002 
[2024-07-15 14:21:06.767624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:21.002 [2024-07-15 14:21:06.767672] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:21.002 14:21:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:21.260 14:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:21.260 "name": "raid_bdev1", 00:28:21.260 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:21.260 "strip_size_kb": 0, 00:28:21.260 "state": "online", 00:28:21.260 "raid_level": "raid1", 00:28:21.260 "superblock": true, 00:28:21.260 "num_base_bdevs": 2, 00:28:21.260 "num_base_bdevs_discovered": 1, 00:28:21.260 "num_base_bdevs_operational": 1, 00:28:21.260 "base_bdevs_list": [ 00:28:21.260 { 00:28:21.260 "name": null, 00:28:21.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:21.260 "is_configured": false, 00:28:21.260 "data_offset": 2048, 00:28:21.260 "data_size": 63488 00:28:21.260 }, 00:28:21.260 { 00:28:21.260 "name": "BaseBdev2", 00:28:21.260 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:21.260 "is_configured": true, 00:28:21.260 "data_offset": 2048, 00:28:21.260 "data_size": 63488 00:28:21.260 } 00:28:21.260 ] 00:28:21.260 }' 00:28:21.260 14:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:21.260 14:21:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:21.827 14:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:22.085 [2024-07-15 14:21:07.945221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:22.085 [2024-07-15 14:21:07.945893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:22.085 [2024-07-15 14:21:07.946176] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:22.085 [2024-07-15 14:21:07.946410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:22.085 [2024-07-15 14:21:07.947065] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:22.085 [2024-07-15 14:21:07.947297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:22.085 [2024-07-15 14:21:07.947605] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:22.085 [2024-07-15 14:21:07.947741] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:22.085 [2024-07-15 14:21:07.947853] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:22.085 [2024-07-15 14:21:07.948015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:22.085 [2024-07-15 14:21:07.961264] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230 00:28:22.085 spare 00:28:22.085 [2024-07-15 14:21:07.963605] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:22.085 14:21:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:23.018 14:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:23.018 14:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:23.018 14:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:23.018 14:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:23.018 14:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:23.018 14:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.018 14:21:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.276 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:23.276 "name": "raid_bdev1", 00:28:23.276 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:23.276 "strip_size_kb": 0, 00:28:23.276 "state": "online", 00:28:23.276 "raid_level": "raid1", 00:28:23.276 "superblock": true, 00:28:23.276 "num_base_bdevs": 2, 00:28:23.276 "num_base_bdevs_discovered": 2, 00:28:23.276 "num_base_bdevs_operational": 2, 00:28:23.276 "process": { 00:28:23.276 "type": "rebuild", 00:28:23.276 "target": "spare", 00:28:23.276 "progress": { 00:28:23.276 "blocks": 24576, 00:28:23.276 "percent": 38 00:28:23.276 } 00:28:23.276 }, 00:28:23.276 "base_bdevs_list": [ 00:28:23.276 { 00:28:23.276 "name": "spare", 00:28:23.276 "uuid": "a49cbe28-4893-52c3-8d4d-a36531d767a8", 00:28:23.276 "is_configured": true, 00:28:23.276 "data_offset": 2048, 00:28:23.276 "data_size": 63488 00:28:23.276 }, 00:28:23.276 { 00:28:23.276 "name": "BaseBdev2", 00:28:23.276 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:23.276 "is_configured": true, 00:28:23.276 "data_offset": 2048, 00:28:23.276 "data_size": 63488 00:28:23.276 } 00:28:23.276 ] 00:28:23.276 }' 00:28:23.276 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:23.533 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:23.533 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:23.534 
14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:23.534 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:23.792 [2024-07-15 14:21:09.620950] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.792 [2024-07-15 14:21:09.684871] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:23.792 [2024-07-15 14:21:09.685607] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.792 [2024-07-15 14:21:09.685802] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:23.792 [2024-07-15 14:21:09.685851] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:23.792 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.051 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.051 "name": "raid_bdev1", 00:28:24.051 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:24.051 "strip_size_kb": 0, 00:28:24.051 "state": "online", 00:28:24.051 "raid_level": "raid1", 00:28:24.051 "superblock": true, 00:28:24.051 "num_base_bdevs": 2, 00:28:24.051 "num_base_bdevs_discovered": 1, 00:28:24.051 "num_base_bdevs_operational": 1, 00:28:24.051 "base_bdevs_list": [ 00:28:24.051 { 00:28:24.051 "name": null, 00:28:24.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.051 "is_configured": false, 00:28:24.051 "data_offset": 2048, 00:28:24.051 "data_size": 63488 00:28:24.051 }, 00:28:24.051 { 00:28:24.051 "name": "BaseBdev2", 00:28:24.051 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:24.051 "is_configured": true, 00:28:24.051 "data_offset": 2048, 00:28:24.051 "data_size": 63488 00:28:24.051 } 00:28:24.051 ] 00:28:24.051 }' 00:28:24.051 14:21:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.051 14:21:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:24.987 "name": "raid_bdev1", 00:28:24.987 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:24.987 "strip_size_kb": 0, 00:28:24.987 "state": "online", 00:28:24.987 "raid_level": "raid1", 00:28:24.987 "superblock": true, 00:28:24.987 "num_base_bdevs": 2, 00:28:24.987 "num_base_bdevs_discovered": 1, 00:28:24.987 "num_base_bdevs_operational": 1, 00:28:24.987 "base_bdevs_list": [ 00:28:24.987 { 00:28:24.987 "name": null, 00:28:24.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.987 "is_configured": false, 00:28:24.987 "data_offset": 2048, 00:28:24.987 "data_size": 63488 00:28:24.987 }, 00:28:24.987 { 00:28:24.987 "name": "BaseBdev2", 00:28:24.987 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:24.987 "is_configured": true, 00:28:24.987 "data_offset": 2048, 00:28:24.987 "data_size": 63488 00:28:24.987 } 00:28:24.987 ] 00:28:24.987 }' 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:24.987 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:25.246 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:25.246 14:21:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:25.504 14:21:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:25.763 [2024-07-15 14:21:11.518711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:25.763 [2024-07-15 14:21:11.519408] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.763 [2024-07-15 14:21:11.519714] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:25.763 [2024-07-15 14:21:11.519956] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.763 [2024-07-15 14:21:11.520514] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.763 [2024-07-15 14:21:11.520764] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:25.763 [2024-07-15 14:21:11.521088] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:25.763 [2024-07-15 14:21:11.521218] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:25.763 [2024-07-15 14:21:11.521331] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:25.763 BaseBdev1 00:28:25.763 14:21:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.699 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.957 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.957 "name": "raid_bdev1", 00:28:26.957 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:26.957 "strip_size_kb": 0, 00:28:26.957 "state": "online", 00:28:26.957 "raid_level": "raid1", 00:28:26.957 "superblock": true, 00:28:26.957 "num_base_bdevs": 2, 00:28:26.957 "num_base_bdevs_discovered": 1, 00:28:26.957 "num_base_bdevs_operational": 1, 00:28:26.957 "base_bdevs_list": [ 00:28:26.957 { 00:28:26.957 "name": null, 00:28:26.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.957 "is_configured": false, 00:28:26.957 "data_offset": 2048, 00:28:26.957 "data_size": 63488 00:28:26.957 }, 00:28:26.958 { 00:28:26.958 "name": "BaseBdev2", 00:28:26.958 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:26.958 "is_configured": true, 00:28:26.958 "data_offset": 2048, 00:28:26.958 "data_size": 63488 00:28:26.958 } 00:28:26.958 ] 00:28:26.958 }' 00:28:26.958 14:21:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.958 14:21:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:27.524 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:27.524 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:27.524 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:27.524 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:27.524 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:27.524 14:21:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.524 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.782 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.782 "name": "raid_bdev1", 00:28:27.782 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:27.782 "strip_size_kb": 0, 00:28:27.782 "state": "online", 00:28:27.782 "raid_level": "raid1", 00:28:27.782 "superblock": true, 00:28:27.782 "num_base_bdevs": 2, 00:28:27.782 "num_base_bdevs_discovered": 1, 00:28:27.782 "num_base_bdevs_operational": 1, 00:28:27.782 "base_bdevs_list": [ 00:28:27.782 { 00:28:27.782 "name": null, 00:28:27.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.782 "is_configured": false, 00:28:27.782 "data_offset": 2048, 00:28:27.782 "data_size": 63488 00:28:27.782 }, 00:28:27.782 { 00:28:27.782 "name": "BaseBdev2", 00:28:27.782 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:27.782 "is_configured": true, 00:28:27.782 "data_offset": 2048, 00:28:27.782 "data_size": 63488 00:28:27.782 } 00:28:27.782 ] 00:28:27.782 }' 00:28:27.782 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:28.039 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:28.039 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:28.039 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:28.039 14:21:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.039 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:28:28.039 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:28.040 14:21:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:28.298 [2024-07-15 14:21:14.083121] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:28.298 [2024-07-15 14:21:14.083458] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:28.298 [2024-07-15 14:21:14.083582] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:28.298 request: 00:28:28.298 { 00:28:28.298 "base_bdev": "BaseBdev1", 00:28:28.298 "raid_bdev": "raid_bdev1", 00:28:28.298 "method": "bdev_raid_add_base_bdev", 00:28:28.298 "req_id": 1 00:28:28.298 } 00:28:28.298 Got JSON-RPC error response 00:28:28.298 response: 00:28:28.298 { 00:28:28.298 "code": -22, 00:28:28.298 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:28.298 } 00:28:28.298 14:21:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:28:28.298 14:21:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:28.298 14:21:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:28.298 14:21:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:28.298 14:21:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.234 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.494 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:29.494 "name": "raid_bdev1", 00:28:29.494 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:29.494 "strip_size_kb": 0, 00:28:29.494 "state": "online", 00:28:29.494 "raid_level": "raid1", 00:28:29.494 "superblock": true, 00:28:29.494 "num_base_bdevs": 2, 00:28:29.494 "num_base_bdevs_discovered": 1, 00:28:29.494 "num_base_bdevs_operational": 1, 00:28:29.494 "base_bdevs_list": [ 00:28:29.494 { 00:28:29.494 "name": null, 00:28:29.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.494 "is_configured": false, 00:28:29.494 "data_offset": 2048, 00:28:29.494 "data_size": 63488 00:28:29.494 }, 00:28:29.494 { 00:28:29.494 "name": "BaseBdev2", 00:28:29.494 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 
00:28:29.494 "is_configured": true, 00:28:29.494 "data_offset": 2048, 00:28:29.494 "data_size": 63488 00:28:29.494 } 00:28:29.494 ] 00:28:29.494 }' 00:28:29.494 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:29.494 14:21:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:30.060 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:30.060 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:30.061 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:30.061 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:30.061 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:30.061 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:30.061 14:21:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.319 14:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:30.319 "name": "raid_bdev1", 00:28:30.319 "uuid": "cbeec2f7-2a4f-4841-99ca-459cf71781e0", 00:28:30.319 "strip_size_kb": 0, 00:28:30.319 "state": "online", 00:28:30.319 "raid_level": "raid1", 00:28:30.319 "superblock": true, 00:28:30.319 "num_base_bdevs": 2, 00:28:30.319 "num_base_bdevs_discovered": 1, 00:28:30.319 "num_base_bdevs_operational": 1, 00:28:30.319 "base_bdevs_list": [ 00:28:30.319 { 00:28:30.319 "name": null, 00:28:30.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.319 "is_configured": false, 00:28:30.319 "data_offset": 2048, 00:28:30.319 "data_size": 63488 00:28:30.319 }, 00:28:30.319 { 00:28:30.319 "name": "BaseBdev2", 00:28:30.319 "uuid": "708d55d4-6eb8-57dc-b13a-5f9b5de7a80e", 00:28:30.319 "is_configured": true, 00:28:30.319 "data_offset": 2048, 00:28:30.319 "data_size": 63488 00:28:30.319 } 00:28:30.319 ] 00:28:30.319 }' 00:28:30.319 14:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:30.319 14:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:30.319 14:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 210929 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 210929 ']' 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 210929 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 210929 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 210929' 00:28:30.578 killing process with pid 210929 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 210929 00:28:30.578 Received shutdown signal, test time was about 60.000000 seconds 00:28:30.578 00:28:30.578 Latency(us) 00:28:30.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.578 =================================================================================================================== 00:28:30.578 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:30.578 14:21:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 210929 00:28:30.578 [2024-07-15 14:21:16.353102] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:30.578 [2024-07-15 14:21:16.353197] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:30.578 [2024-07-15 14:21:16.353233] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:30.578 [2024-07-15 14:21:16.353243] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:28:30.835 [2024-07-15 14:21:16.605170] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:31.767 14:21:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:28:31.767 00:28:31.767 real 0m37.961s 00:28:31.767 user 0m57.459s 00:28:31.767 sys 0m5.481s 00:28:31.767 14:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:31.767 14:21:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:31.767 ************************************ 00:28:31.767 END TEST raid_rebuild_test_sb 00:28:31.767 ************************************ 00:28:32.025 14:21:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:32.025 14:21:17 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:28:32.025 14:21:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:32.025 14:21:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.025 14:21:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:32.025 ************************************ 00:28:32.025 START TEST raid_rebuild_test_io 00:28:32.025 ************************************ 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 false true true 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ 
)) 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=211850 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 211850 /var/tmp/spdk-raid.sock 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 211850 ']' 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:32.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:32.025 14:21:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:32.025 [2024-07-15 14:21:17.877588] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:28:32.025 [2024-07-15 14:21:17.878451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211850 ] 00:28:32.025 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:32.025 Zero copy mechanism will not be used. 
00:28:32.283 [2024-07-15 14:21:18.031123] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.283 [2024-07-15 14:21:18.247472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.541 [2024-07-15 14:21:18.446736] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:33.107 14:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:33.107 14:21:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:28:33.107 14:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:33.107 14:21:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:33.365 BaseBdev1_malloc 00:28:33.365 14:21:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:33.624 [2024-07-15 14:21:19.463556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:33.624 [2024-07-15 14:21:19.464241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.624 [2024-07-15 14:21:19.464540] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:33.624 [2024-07-15 14:21:19.464790] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.624 [2024-07-15 14:21:19.466781] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.625 [2024-07-15 14:21:19.467035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:33.625 BaseBdev1 00:28:33.625 14:21:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:33.625 14:21:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:33.883 BaseBdev2_malloc 00:28:33.883 14:21:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:34.142 [2024-07-15 14:21:20.079363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:34.142 [2024-07-15 14:21:20.080152] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.142 [2024-07-15 14:21:20.080482] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:34.142 [2024-07-15 14:21:20.080935] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.142 [2024-07-15 14:21:20.083376] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.142 [2024-07-15 14:21:20.083653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:34.142 BaseBdev2 00:28:34.142 14:21:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:34.401 spare_malloc 00:28:34.401 14:21:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:34.659 spare_delay 00:28:34.918 14:21:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:34.918 [2024-07-15 14:21:20.894647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:34.918 [2024-07-15 14:21:20.895592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:34.918 [2024-07-15 14:21:20.895875] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:34.918 [2024-07-15 14:21:20.896160] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:34.918 [2024-07-15 14:21:20.898549] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:34.918 [2024-07-15 14:21:20.898860] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:34.918 spare 00:28:34.918 14:21:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:35.176 [2024-07-15 14:21:21.179481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:35.435 [2024-07-15 14:21:21.181734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:35.435 [2024-07-15 14:21:21.182048] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:28:35.435 [2024-07-15 14:21:21.182199] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:35.435 [2024-07-15 14:21:21.182435] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:35.435 [2024-07-15 14:21:21.182913] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:28:35.435 [2024-07-15 14:21:21.183101] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:28:35.435 [2024-07-15 14:21:21.183440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:35.435 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.435 14:21:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:35.694 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:35.694 "name": "raid_bdev1", 00:28:35.694 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:35.694 "strip_size_kb": 0, 00:28:35.694 "state": "online", 00:28:35.694 "raid_level": "raid1", 00:28:35.694 "superblock": false, 00:28:35.694 "num_base_bdevs": 2, 00:28:35.694 "num_base_bdevs_discovered": 2, 00:28:35.694 "num_base_bdevs_operational": 2, 00:28:35.694 "base_bdevs_list": [ 00:28:35.694 { 00:28:35.694 "name": "BaseBdev1", 00:28:35.694 "uuid": "4eea611f-39cb-5b7c-b242-c4674c846ca7", 00:28:35.694 "is_configured": true, 00:28:35.694 "data_offset": 0, 00:28:35.694 "data_size": 65536 00:28:35.694 }, 00:28:35.694 { 00:28:35.694 "name": "BaseBdev2", 00:28:35.694 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:35.694 "is_configured": true, 00:28:35.694 "data_offset": 0, 00:28:35.694 "data_size": 65536 00:28:35.694 } 00:28:35.694 ] 00:28:35.694 }' 00:28:35.694 14:21:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:35.694 14:21:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:36.260 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:36.260 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:36.518 [2024-07-15 14:21:22.363929] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:36.518 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:36.518 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:36.518 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:36.809 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:36.809 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:28:36.809 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:36.809 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:36.809 [2024-07-15 14:21:22.752751] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:36.809 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:36.809 Zero copy mechanism will not be used. 00:28:36.809 Running I/O for 60 seconds... 
00:28:37.067 [2024-07-15 14:21:22.867886] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:37.067 [2024-07-15 14:21:22.873170] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:37.067 14:21:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:37.325 14:21:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:37.325 "name": "raid_bdev1", 00:28:37.325 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:37.325 "strip_size_kb": 0, 00:28:37.325 "state": "online", 00:28:37.325 "raid_level": "raid1", 00:28:37.325 "superblock": false, 00:28:37.325 "num_base_bdevs": 2, 00:28:37.325 "num_base_bdevs_discovered": 1, 00:28:37.325 "num_base_bdevs_operational": 1, 00:28:37.325 "base_bdevs_list": [ 00:28:37.325 { 00:28:37.325 "name": null, 00:28:37.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:37.325 "is_configured": false, 00:28:37.325 "data_offset": 0, 00:28:37.325 "data_size": 65536 00:28:37.325 }, 00:28:37.325 { 00:28:37.325 "name": "BaseBdev2", 00:28:37.325 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:37.325 "is_configured": true, 00:28:37.325 "data_offset": 0, 00:28:37.325 "data_size": 65536 00:28:37.325 } 00:28:37.325 ] 00:28:37.325 }' 00:28:37.325 14:21:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:37.325 14:21:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:37.893 14:21:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:38.152 [2024-07-15 14:21:24.010920] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:38.152 [2024-07-15 14:21:24.057364] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:38.152 [2024-07-15 14:21:24.059357] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:38.152 14:21:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:38.410 [2024-07-15 14:21:24.165885] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:38.410 [2024-07-15 14:21:24.166912] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:38.410 [2024-07-15 14:21:24.382934] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:38.410 [2024-07-15 14:21:24.383731] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:38.974 [2024-07-15 14:21:24.732214] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:38.974 [2024-07-15 14:21:24.945936] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:38.974 [2024-07-15 14:21:24.946662] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:39.232 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:39.232 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:39.232 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:39.232 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:39.232 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:39.232 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:39.232 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.490 [2024-07-15 14:21:25.270925] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:39.490 [2024-07-15 14:21:25.271990] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:39.490 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:39.490 "name": "raid_bdev1", 00:28:39.490 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:39.490 "strip_size_kb": 0, 00:28:39.490 "state": "online", 00:28:39.490 "raid_level": "raid1", 00:28:39.490 "superblock": false, 00:28:39.490 "num_base_bdevs": 2, 00:28:39.490 "num_base_bdevs_discovered": 2, 00:28:39.490 "num_base_bdevs_operational": 2, 00:28:39.490 "process": { 00:28:39.490 "type": "rebuild", 00:28:39.490 "target": "spare", 00:28:39.490 "progress": { 00:28:39.490 "blocks": 14336, 00:28:39.490 "percent": 21 00:28:39.490 } 00:28:39.490 }, 00:28:39.490 "base_bdevs_list": [ 00:28:39.490 { 00:28:39.490 "name": "spare", 00:28:39.490 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:39.490 "is_configured": true, 00:28:39.490 "data_offset": 0, 00:28:39.490 "data_size": 65536 00:28:39.490 }, 00:28:39.490 { 00:28:39.490 "name": "BaseBdev2", 00:28:39.490 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:39.490 "is_configured": true, 00:28:39.490 "data_offset": 0, 00:28:39.490 "data_size": 65536 00:28:39.490 } 00:28:39.490 ] 00:28:39.490 }' 00:28:39.490 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:28:39.490 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:39.490 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:39.490 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:39.490 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:39.490 [2024-07-15 14:21:25.492574] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:39.750 [2024-07-15 14:21:25.692543] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:39.750 [2024-07-15 14:21:25.752943] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:40.031 [2024-07-15 14:21:25.762111] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:40.031 [2024-07-15 14:21:25.762496] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:40.031 [2024-07-15 14:21:25.762560] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:40.031 [2024-07-15 14:21:25.805291] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:40.031 14:21:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.290 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:40.290 "name": "raid_bdev1", 00:28:40.290 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:40.290 "strip_size_kb": 0, 00:28:40.290 "state": "online", 00:28:40.290 "raid_level": "raid1", 00:28:40.290 "superblock": false, 00:28:40.290 "num_base_bdevs": 2, 00:28:40.290 "num_base_bdevs_discovered": 1, 00:28:40.290 "num_base_bdevs_operational": 1, 00:28:40.290 "base_bdevs_list": [ 00:28:40.290 { 00:28:40.290 "name": null, 00:28:40.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:40.290 "is_configured": false, 00:28:40.290 "data_offset": 0, 00:28:40.290 "data_size": 
65536 00:28:40.290 }, 00:28:40.290 { 00:28:40.290 "name": "BaseBdev2", 00:28:40.290 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:40.290 "is_configured": true, 00:28:40.290 "data_offset": 0, 00:28:40.290 "data_size": 65536 00:28:40.290 } 00:28:40.290 ] 00:28:40.290 }' 00:28:40.290 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:40.290 14:21:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:40.857 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:40.857 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:40.857 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:40.857 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:40.857 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:40.857 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:40.857 14:21:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:41.116 14:21:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:41.116 "name": "raid_bdev1", 00:28:41.116 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:41.116 "strip_size_kb": 0, 00:28:41.116 "state": "online", 00:28:41.116 "raid_level": "raid1", 00:28:41.116 "superblock": false, 00:28:41.116 "num_base_bdevs": 2, 00:28:41.116 "num_base_bdevs_discovered": 1, 00:28:41.116 "num_base_bdevs_operational": 1, 00:28:41.116 "base_bdevs_list": [ 00:28:41.116 { 00:28:41.116 "name": null, 00:28:41.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.116 "is_configured": false, 00:28:41.116 "data_offset": 0, 00:28:41.116 "data_size": 65536 00:28:41.116 }, 00:28:41.116 { 00:28:41.116 "name": "BaseBdev2", 00:28:41.116 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:41.116 "is_configured": true, 00:28:41.116 "data_offset": 0, 00:28:41.116 "data_size": 65536 00:28:41.116 } 00:28:41.116 ] 00:28:41.116 }' 00:28:41.117 14:21:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:41.117 14:21:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:41.117 14:21:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:41.117 14:21:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:41.117 14:21:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:41.375 [2024-07-15 14:21:27.342037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:41.633 14:21:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:41.633 [2024-07-15 14:21:27.400357] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:41.633 [2024-07-15 14:21:27.401907] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:41.633 [2024-07-15 14:21:27.504687] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:28:41.633 [2024-07-15 14:21:27.505578] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:41.891 [2024-07-15 14:21:27.721995] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:41.891 [2024-07-15 14:21:27.722561] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:42.455 [2024-07-15 14:21:28.173564] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:42.455 [2024-07-15 14:21:28.174070] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:42.455 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.455 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.455 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.455 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.455 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.455 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.455 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.455 [2024-07-15 14:21:28.412591] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:42.455 [2024-07-15 14:21:28.413336] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:42.713 [2024-07-15 14:21:28.615241] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:42.713 [2024-07-15 14:21:28.615800] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:28:42.713 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:42.713 "name": "raid_bdev1", 00:28:42.713 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:42.713 "strip_size_kb": 0, 00:28:42.713 "state": "online", 00:28:42.713 "raid_level": "raid1", 00:28:42.713 "superblock": false, 00:28:42.713 "num_base_bdevs": 2, 00:28:42.713 "num_base_bdevs_discovered": 2, 00:28:42.713 "num_base_bdevs_operational": 2, 00:28:42.713 "process": { 00:28:42.713 "type": "rebuild", 00:28:42.713 "target": "spare", 00:28:42.713 "progress": { 00:28:42.713 "blocks": 16384, 00:28:42.713 "percent": 25 00:28:42.713 } 00:28:42.713 }, 00:28:42.713 "base_bdevs_list": [ 00:28:42.713 { 00:28:42.713 "name": "spare", 00:28:42.713 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:42.713 "is_configured": true, 00:28:42.713 "data_offset": 0, 00:28:42.713 "data_size": 65536 00:28:42.713 }, 00:28:42.713 { 00:28:42.713 "name": "BaseBdev2", 00:28:42.713 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:42.713 "is_configured": true, 00:28:42.713 "data_offset": 0, 00:28:42.713 "data_size": 65536 00:28:42.713 } 00:28:42.713 ] 00:28:42.713 }' 00:28:42.713 14:21:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=945 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:42.971 14:21:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:42.971 [2024-07-15 14:21:28.836047] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:42.971 [2024-07-15 14:21:28.836747] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:28:43.228 [2024-07-15 14:21:29.048942] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:28:43.228 14:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:43.228 "name": "raid_bdev1", 00:28:43.228 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:43.228 "strip_size_kb": 0, 00:28:43.228 "state": "online", 00:28:43.228 "raid_level": "raid1", 00:28:43.228 "superblock": false, 00:28:43.228 "num_base_bdevs": 2, 00:28:43.228 "num_base_bdevs_discovered": 2, 00:28:43.228 "num_base_bdevs_operational": 2, 00:28:43.228 "process": { 00:28:43.228 "type": "rebuild", 00:28:43.228 "target": "spare", 00:28:43.228 "progress": { 00:28:43.228 "blocks": 22528, 00:28:43.228 "percent": 34 00:28:43.228 } 00:28:43.228 }, 00:28:43.228 "base_bdevs_list": [ 00:28:43.228 { 00:28:43.228 "name": "spare", 00:28:43.228 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:43.228 "is_configured": true, 00:28:43.228 "data_offset": 0, 00:28:43.228 "data_size": 65536 00:28:43.228 }, 00:28:43.228 { 00:28:43.228 "name": "BaseBdev2", 00:28:43.228 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:43.228 "is_configured": true, 00:28:43.228 "data_offset": 0, 00:28:43.228 "data_size": 65536 
00:28:43.228 } 00:28:43.228 ] 00:28:43.228 }' 00:28:43.228 14:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:43.228 14:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:43.228 14:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:43.228 14:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:43.228 14:21:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:43.486 [2024-07-15 14:21:29.373341] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:28:43.745 [2024-07-15 14:21:29.491349] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:44.005 [2024-07-15 14:21:29.809268] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:28:44.263 [2024-07-15 14:21:30.020461] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:44.263 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:44.521 [2024-07-15 14:21:30.360030] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:28:44.521 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:44.521 "name": "raid_bdev1", 00:28:44.521 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:44.521 "strip_size_kb": 0, 00:28:44.521 "state": "online", 00:28:44.521 "raid_level": "raid1", 00:28:44.521 "superblock": false, 00:28:44.521 "num_base_bdevs": 2, 00:28:44.521 "num_base_bdevs_discovered": 2, 00:28:44.521 "num_base_bdevs_operational": 2, 00:28:44.521 "process": { 00:28:44.521 "type": "rebuild", 00:28:44.521 "target": "spare", 00:28:44.521 "progress": { 00:28:44.521 "blocks": 38912, 00:28:44.521 "percent": 59 00:28:44.521 } 00:28:44.521 }, 00:28:44.521 "base_bdevs_list": [ 00:28:44.521 { 00:28:44.521 "name": "spare", 00:28:44.521 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:44.521 "is_configured": true, 00:28:44.521 "data_offset": 0, 00:28:44.521 "data_size": 65536 00:28:44.521 }, 00:28:44.521 { 00:28:44.521 "name": "BaseBdev2", 00:28:44.521 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:44.521 "is_configured": true, 00:28:44.521 "data_offset": 0, 00:28:44.521 "data_size": 65536 00:28:44.521 } 00:28:44.521 ] 
00:28:44.521 }' 00:28:44.521 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:44.521 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:44.521 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:44.779 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:44.779 14:21:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:44.779 [2024-07-15 14:21:30.566792] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:44.779 [2024-07-15 14:21:30.567261] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:28:45.345 [2024-07-15 14:21:31.236342] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:45.603 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:45.861 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:45.861 "name": "raid_bdev1", 00:28:45.861 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:45.861 "strip_size_kb": 0, 00:28:45.861 "state": "online", 00:28:45.861 "raid_level": "raid1", 00:28:45.861 "superblock": false, 00:28:45.861 "num_base_bdevs": 2, 00:28:45.861 "num_base_bdevs_discovered": 2, 00:28:45.861 "num_base_bdevs_operational": 2, 00:28:45.861 "process": { 00:28:45.861 "type": "rebuild", 00:28:45.861 "target": "spare", 00:28:45.861 "progress": { 00:28:45.861 "blocks": 59392, 00:28:45.861 "percent": 90 00:28:45.861 } 00:28:45.861 }, 00:28:45.861 "base_bdevs_list": [ 00:28:45.861 { 00:28:45.861 "name": "spare", 00:28:45.861 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:45.861 "is_configured": true, 00:28:45.861 "data_offset": 0, 00:28:45.861 "data_size": 65536 00:28:45.861 }, 00:28:45.861 { 00:28:45.861 "name": "BaseBdev2", 00:28:45.861 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:45.861 "is_configured": true, 00:28:45.861 "data_offset": 0, 00:28:45.861 "data_size": 65536 00:28:45.861 } 00:28:45.861 ] 00:28:45.861 }' 00:28:45.861 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:46.119 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:46.119 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:28:46.119 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:46.119 14:21:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:46.119 [2024-07-15 14:21:32.094541] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:46.378 [2024-07-15 14:21:32.199018] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:46.378 [2024-07-15 14:21:32.202281] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:46.944 14:21:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.509 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:47.509 "name": "raid_bdev1", 00:28:47.509 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:47.509 "strip_size_kb": 0, 00:28:47.509 "state": "online", 00:28:47.509 "raid_level": "raid1", 00:28:47.509 "superblock": false, 00:28:47.509 "num_base_bdevs": 2, 00:28:47.509 "num_base_bdevs_discovered": 2, 00:28:47.509 "num_base_bdevs_operational": 2, 00:28:47.509 "base_bdevs_list": [ 00:28:47.509 { 00:28:47.509 "name": "spare", 00:28:47.509 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:47.509 "is_configured": true, 00:28:47.509 "data_offset": 0, 00:28:47.509 "data_size": 65536 00:28:47.509 }, 00:28:47.509 { 00:28:47.509 "name": "BaseBdev2", 00:28:47.509 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:47.509 "is_configured": true, 00:28:47.509 "data_offset": 0, 00:28:47.509 "data_size": 65536 00:28:47.509 } 00:28:47.509 ] 00:28:47.509 }' 00:28:47.509 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:47.510 14:21:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:47.510 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:47.770 "name": "raid_bdev1", 00:28:47.770 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:47.770 "strip_size_kb": 0, 00:28:47.770 "state": "online", 00:28:47.770 "raid_level": "raid1", 00:28:47.770 "superblock": false, 00:28:47.770 "num_base_bdevs": 2, 00:28:47.770 "num_base_bdevs_discovered": 2, 00:28:47.770 "num_base_bdevs_operational": 2, 00:28:47.770 "base_bdevs_list": [ 00:28:47.770 { 00:28:47.770 "name": "spare", 00:28:47.770 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:47.770 "is_configured": true, 00:28:47.770 "data_offset": 0, 00:28:47.770 "data_size": 65536 00:28:47.770 }, 00:28:47.770 { 00:28:47.770 "name": "BaseBdev2", 00:28:47.770 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:47.770 "is_configured": true, 00:28:47.770 "data_offset": 0, 00:28:47.770 "data_size": 65536 00:28:47.770 } 00:28:47.770 ] 00:28:47.770 }' 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:47.770 14:21:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.030 14:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.030 "name": "raid_bdev1", 00:28:48.030 "uuid": "6ffa5122-305d-47ee-a556-ad1418989777", 00:28:48.030 "strip_size_kb": 0, 00:28:48.030 "state": "online", 00:28:48.030 "raid_level": "raid1", 00:28:48.030 "superblock": false, 00:28:48.030 
"num_base_bdevs": 2, 00:28:48.030 "num_base_bdevs_discovered": 2, 00:28:48.030 "num_base_bdevs_operational": 2, 00:28:48.030 "base_bdevs_list": [ 00:28:48.030 { 00:28:48.030 "name": "spare", 00:28:48.030 "uuid": "43b91321-fc7b-5a40-93dd-76b64313f452", 00:28:48.030 "is_configured": true, 00:28:48.030 "data_offset": 0, 00:28:48.030 "data_size": 65536 00:28:48.030 }, 00:28:48.030 { 00:28:48.030 "name": "BaseBdev2", 00:28:48.030 "uuid": "def05249-d50b-59cf-813a-35b4e5bda798", 00:28:48.030 "is_configured": true, 00:28:48.030 "data_offset": 0, 00:28:48.030 "data_size": 65536 00:28:48.030 } 00:28:48.030 ] 00:28:48.030 }' 00:28:48.030 14:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.030 14:21:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:48.965 14:21:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:48.965 [2024-07-15 14:21:34.949129] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:48.965 [2024-07-15 14:21:34.949348] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:48.965 00:28:48.965 Latency(us) 00:28:48.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.965 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:28:48.965 raid_bdev1 : 12.21 134.56 403.69 0.00 0.00 10817.44 294.17 112006.98 00:28:48.965 =================================================================================================================== 00:28:48.965 Total : 134.56 403.69 0.00 0.00 10817.44 294.17 112006.98 00:28:49.224 [2024-07-15 14:21:34.983761] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:49.224 [2024-07-15 14:21:34.983945] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:49.224 0 00:28:49.224 [2024-07-15 14:21:34.984170] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:49.224 [2024-07-15 14:21:34.984324] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:28:49.224 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.224 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:49.482 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:49.483 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:28:49.741 /dev/nbd0 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:49.741 1+0 records in 00:28:49.741 1+0 records out 00:28:49.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051502 s, 8.0 MB/s 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:49.741 
14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:49.741 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:28:50.000 /dev/nbd1 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:50.000 1+0 records in 00:28:50.000 1+0 records out 00:28:50.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000574447 s, 7.1 MB/s 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:50.000 14:21:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:50.259 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 211850 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 211850 ']' 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 211850 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:50.517 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 211850 00:28:50.776 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:50.776 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:50.776 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- 
# echo 'killing process with pid 211850' 00:28:50.776 killing process with pid 211850 00:28:50.776 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 211850 00:28:50.776 Received shutdown signal, test time was about 13.767170 seconds 00:28:50.776 00:28:50.776 Latency(us) 00:28:50.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.776 =================================================================================================================== 00:28:50.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.776 14:21:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 211850 00:28:50.776 [2024-07-15 14:21:36.522623] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:50.776 [2024-07-15 14:21:36.718710] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:28:52.168 00:28:52.168 real 0m20.081s 00:28:52.168 user 0m30.904s 00:28:52.168 sys 0m2.182s 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:28:52.168 ************************************ 00:28:52.168 END TEST raid_rebuild_test_io 00:28:52.168 ************************************ 00:28:52.168 14:21:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:28:52.168 14:21:37 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:28:52.168 14:21:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:28:52.168 14:21:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:52.168 14:21:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:52.168 ************************************ 00:28:52.168 START TEST raid_rebuild_test_sb_io 00:28:52.168 ************************************ 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true true true 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:52.168 
14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=212333 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 212333 /var/tmp/spdk-raid.sock 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 212333 ']' 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:52.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:52.168 14:21:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:52.168 [2024-07-15 14:21:38.027289] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:28:52.168 [2024-07-15 14:21:38.027669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid212333 ] 00:28:52.168 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:52.168 Zero copy mechanism will not be used. 
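[Editorial note, not part of the captured output] The "Zero copy mechanism will not be used" notice directly above is expected here rather than a failure: the sb_io variant launches bdevperf with "-o 3M -q 2 -w randrw -M 50 -t 60" (3 MiB I/Os at queue depth 2, 50% random read/write for 60 seconds), and a 3145728-byte I/O size is larger than the 65536-byte zero-copy threshold bdevperf reports, so it simply falls back to copying buffers for the background workload.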
00:28:52.426 [2024-07-15 14:21:38.190507] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.426 [2024-07-15 14:21:38.407597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.685 [2024-07-15 14:21:38.605593] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:53.252 14:21:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:53.252 14:21:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:28:53.252 14:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:53.252 14:21:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:53.252 BaseBdev1_malloc 00:28:53.511 14:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:53.771 [2024-07-15 14:21:39.530218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:53.771 [2024-07-15 14:21:39.530493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:53.771 [2024-07-15 14:21:39.530660] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:28:53.771 [2024-07-15 14:21:39.530871] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:53.771 [2024-07-15 14:21:39.532662] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:53.771 [2024-07-15 14:21:39.532873] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:53.771 BaseBdev1 00:28:53.771 14:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:53.771 14:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:54.030 BaseBdev2_malloc 00:28:54.030 14:21:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:54.289 [2024-07-15 14:21:40.046314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:54.289 [2024-07-15 14:21:40.046660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:54.289 [2024-07-15 14:21:40.046898] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:28:54.289 [2024-07-15 14:21:40.047045] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:54.289 [2024-07-15 14:21:40.049181] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:54.289 [2024-07-15 14:21:40.049371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:54.289 BaseBdev2 00:28:54.289 14:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:54.548 spare_malloc 00:28:54.548 14:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:54.806 spare_delay 00:28:54.806 14:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:55.066 [2024-07-15 14:21:40.868599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:55.066 [2024-07-15 14:21:40.868972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:55.066 [2024-07-15 14:21:40.869146] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:55.066 [2024-07-15 14:21:40.869328] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:55.066 [2024-07-15 14:21:40.871219] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:55.066 [2024-07-15 14:21:40.871399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:55.066 spare 00:28:55.066 14:21:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:28:55.335 [2024-07-15 14:21:41.140676] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:55.335 [2024-07-15 14:21:41.142375] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:55.335 [2024-07-15 14:21:41.142668] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:28:55.335 [2024-07-15 14:21:41.142830] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:55.335 [2024-07-15 14:21:41.143058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:28:55.335 [2024-07-15 14:21:41.143449] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:28:55.335 [2024-07-15 14:21:41.143583] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:28:55.335 [2024-07-15 14:21:41.143831] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.335 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:28:55.335 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.336 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.594 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:55.594 "name": "raid_bdev1", 00:28:55.594 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:28:55.594 "strip_size_kb": 0, 00:28:55.594 "state": "online", 00:28:55.594 "raid_level": "raid1", 00:28:55.594 "superblock": true, 00:28:55.594 "num_base_bdevs": 2, 00:28:55.594 "num_base_bdevs_discovered": 2, 00:28:55.594 "num_base_bdevs_operational": 2, 00:28:55.594 "base_bdevs_list": [ 00:28:55.594 { 00:28:55.594 "name": "BaseBdev1", 00:28:55.594 "uuid": "8cd2af45-e6ba-5243-97c0-2eacff736138", 00:28:55.594 "is_configured": true, 00:28:55.594 "data_offset": 2048, 00:28:55.594 "data_size": 63488 00:28:55.594 }, 00:28:55.594 { 00:28:55.594 "name": "BaseBdev2", 00:28:55.594 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:28:55.594 "is_configured": true, 00:28:55.594 "data_offset": 2048, 00:28:55.594 "data_size": 63488 00:28:55.594 } 00:28:55.594 ] 00:28:55.594 }' 00:28:55.594 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:55.594 14:21:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:56.161 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:56.161 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:56.420 [2024-07-15 14:21:42.277716] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:56.420 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:56.420 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.420 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:56.679 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:56.679 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:28:56.679 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:56.679 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:28:56.679 [2024-07-15 14:21:42.654312] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:28:56.679 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:56.679 Zero copy mechanism will not be used. 00:28:56.679 Running I/O for 60 seconds... 
00:28:56.938 [2024-07-15 14:21:42.752939] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:56.938 [2024-07-15 14:21:42.753470] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:56.938 14:21:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:57.196 14:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:57.196 "name": "raid_bdev1", 00:28:57.197 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:28:57.197 "strip_size_kb": 0, 00:28:57.197 "state": "online", 00:28:57.197 "raid_level": "raid1", 00:28:57.197 "superblock": true, 00:28:57.197 "num_base_bdevs": 2, 00:28:57.197 "num_base_bdevs_discovered": 1, 00:28:57.197 "num_base_bdevs_operational": 1, 00:28:57.197 "base_bdevs_list": [ 00:28:57.197 { 00:28:57.197 "name": null, 00:28:57.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:57.197 "is_configured": false, 00:28:57.197 "data_offset": 2048, 00:28:57.197 "data_size": 63488 00:28:57.197 }, 00:28:57.197 { 00:28:57.197 "name": "BaseBdev2", 00:28:57.197 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:28:57.197 "is_configured": true, 00:28:57.197 "data_offset": 2048, 00:28:57.197 "data_size": 63488 00:28:57.197 } 00:28:57.197 ] 00:28:57.197 }' 00:28:57.197 14:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:57.197 14:21:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:57.761 14:21:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:58.019 [2024-07-15 14:21:44.003836] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.277 14:21:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:58.277 [2024-07-15 14:21:44.058040] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:28:58.277 [2024-07-15 14:21:44.060097] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:58.277 
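[Editorial note, not part of the captured output] The rebuild that starts in the next trace lines is tracked by the same polling pattern used earlier in raid_rebuild_test_io: verify_raid_bdev_process fetches the raid bdev over the RPC socket and checks .process.type / .process.target with jq, sleeping one second per iteration until the process reports "none". A rough sketch of that loop, using only the paths and names visible in the trace (the exact internals of bdev_raid.sh may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    timeout=60    # the script derives this from SECONDS plus a test-specific budget
    end=$((SECONDS + timeout))
    while (( SECONDS < end )); do
        # Pull the raid bdev's JSON and inspect the background process fields.
        info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<< "$info") == rebuild ]] || break
        [[ $(jq -r '.process.target // "none"' <<< "$info") == spare ]] || break
        sleep 1
    done

Once the loop exits, the script re-reads the bdev and expects both fields to report "none" and the array to be online with all base bdevs operational, which is exactly what the subsequent verify_raid_bdev_process / verify_raid_bdev_state calls in this log check.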
[2024-07-15 14:21:44.171257] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:58.277 [2024-07-15 14:21:44.172173] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:28:58.535 [2024-07-15 14:21:44.288695] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:58.535 [2024-07-15 14:21:44.289290] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:28:58.812 [2024-07-15 14:21:44.613722] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:58.812 [2024-07-15 14:21:44.614753] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:28:58.812 [2024-07-15 14:21:44.732245] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:28:59.116 [2024-07-15 14:21:44.951566] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:28:59.117 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:59.117 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.117 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:59.117 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:59.117 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.117 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.117 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.375 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.375 "name": "raid_bdev1", 00:28:59.375 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:28:59.375 "strip_size_kb": 0, 00:28:59.375 "state": "online", 00:28:59.375 "raid_level": "raid1", 00:28:59.375 "superblock": true, 00:28:59.375 "num_base_bdevs": 2, 00:28:59.375 "num_base_bdevs_discovered": 2, 00:28:59.375 "num_base_bdevs_operational": 2, 00:28:59.375 "process": { 00:28:59.375 "type": "rebuild", 00:28:59.375 "target": "spare", 00:28:59.375 "progress": { 00:28:59.375 "blocks": 20480, 00:28:59.375 "percent": 32 00:28:59.375 } 00:28:59.375 }, 00:28:59.375 "base_bdevs_list": [ 00:28:59.375 { 00:28:59.375 "name": "spare", 00:28:59.375 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:28:59.375 "is_configured": true, 00:28:59.376 "data_offset": 2048, 00:28:59.376 "data_size": 63488 00:28:59.376 }, 00:28:59.376 { 00:28:59.376 "name": "BaseBdev2", 00:28:59.376 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:28:59.376 "is_configured": true, 00:28:59.376 "data_offset": 2048, 00:28:59.376 "data_size": 63488 00:28:59.376 } 00:28:59.376 ] 00:28:59.376 }' 00:28:59.376 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.635 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:28:59.635 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:59.635 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:59.635 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:59.894 [2024-07-15 14:21:45.691903] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.894 [2024-07-15 14:21:45.703653] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:28:59.894 [2024-07-15 14:21:45.756079] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:59.894 [2024-07-15 14:21:45.763275] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.894 [2024-07-15 14:21:45.763506] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:59.894 [2024-07-15 14:21:45.763567] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:59.894 [2024-07-15 14:21:45.800185] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.894 14:21:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.153 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.153 "name": "raid_bdev1", 00:29:00.153 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:00.153 "strip_size_kb": 0, 00:29:00.153 "state": "online", 00:29:00.153 "raid_level": "raid1", 00:29:00.153 "superblock": true, 00:29:00.153 "num_base_bdevs": 2, 00:29:00.153 "num_base_bdevs_discovered": 1, 00:29:00.153 "num_base_bdevs_operational": 1, 00:29:00.153 "base_bdevs_list": [ 00:29:00.153 { 00:29:00.153 "name": null, 00:29:00.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.154 "is_configured": false, 00:29:00.154 "data_offset": 2048, 00:29:00.154 "data_size": 63488 00:29:00.154 }, 00:29:00.154 { 
00:29:00.154 "name": "BaseBdev2", 00:29:00.154 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:00.154 "is_configured": true, 00:29:00.154 "data_offset": 2048, 00:29:00.154 "data_size": 63488 00:29:00.154 } 00:29:00.154 ] 00:29:00.154 }' 00:29:00.154 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.154 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:01.088 "name": "raid_bdev1", 00:29:01.088 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:01.088 "strip_size_kb": 0, 00:29:01.088 "state": "online", 00:29:01.088 "raid_level": "raid1", 00:29:01.088 "superblock": true, 00:29:01.088 "num_base_bdevs": 2, 00:29:01.088 "num_base_bdevs_discovered": 1, 00:29:01.088 "num_base_bdevs_operational": 1, 00:29:01.088 "base_bdevs_list": [ 00:29:01.088 { 00:29:01.088 "name": null, 00:29:01.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:01.088 "is_configured": false, 00:29:01.088 "data_offset": 2048, 00:29:01.088 "data_size": 63488 00:29:01.088 }, 00:29:01.088 { 00:29:01.088 "name": "BaseBdev2", 00:29:01.088 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:01.088 "is_configured": true, 00:29:01.088 "data_offset": 2048, 00:29:01.088 "data_size": 63488 00:29:01.088 } 00:29:01.088 ] 00:29:01.088 }' 00:29:01.088 14:21:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:01.088 14:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:01.088 14:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:01.088 14:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:01.088 14:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:01.346 [2024-07-15 14:21:47.312621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:01.605 14:21:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:01.605 [2024-07-15 14:21:47.363850] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:01.605 [2024-07-15 14:21:47.365753] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:01.605 [2024-07-15 14:21:47.476830] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
2048 offset_begin: 0 offset_end: 6144 00:29:01.605 [2024-07-15 14:21:47.478023] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:01.864 [2024-07-15 14:21:47.691638] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:01.864 [2024-07-15 14:21:47.692376] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:02.122 [2024-07-15 14:21:47.933880] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:02.381 [2024-07-15 14:21:48.147040] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:02.381 [2024-07-15 14:21:48.147841] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:02.381 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:02.381 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:02.381 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:02.381 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:02.381 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:02.381 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.381 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.641 [2024-07-15 14:21:48.473923] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:02.641 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:02.641 "name": "raid_bdev1", 00:29:02.641 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:02.641 "strip_size_kb": 0, 00:29:02.641 "state": "online", 00:29:02.641 "raid_level": "raid1", 00:29:02.641 "superblock": true, 00:29:02.641 "num_base_bdevs": 2, 00:29:02.641 "num_base_bdevs_discovered": 2, 00:29:02.641 "num_base_bdevs_operational": 2, 00:29:02.641 "process": { 00:29:02.641 "type": "rebuild", 00:29:02.641 "target": "spare", 00:29:02.641 "progress": { 00:29:02.641 "blocks": 14336, 00:29:02.641 "percent": 22 00:29:02.641 } 00:29:02.641 }, 00:29:02.641 "base_bdevs_list": [ 00:29:02.641 { 00:29:02.641 "name": "spare", 00:29:02.641 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:02.641 "is_configured": true, 00:29:02.641 "data_offset": 2048, 00:29:02.641 "data_size": 63488 00:29:02.641 }, 00:29:02.641 { 00:29:02.641 "name": "BaseBdev2", 00:29:02.641 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:02.641 "is_configured": true, 00:29:02.641 "data_offset": 2048, 00:29:02.641 "data_size": 63488 00:29:02.641 } 00:29:02.641 ] 00:29:02.641 }' 00:29:02.641 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- 
# jq -r '.process.target // "none"' 00:29:02.916 [2024-07-15 14:21:48.689827] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:02.916 [2024-07-15 14:21:48.690557] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:02.916 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=965 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.916 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.916 [2024-07-15 14:21:48.900740] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:02.916 [2024-07-15 14:21:48.901822] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:03.176 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:03.176 "name": "raid_bdev1", 00:29:03.176 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:03.176 "strip_size_kb": 0, 00:29:03.176 "state": "online", 00:29:03.176 "raid_level": "raid1", 00:29:03.176 "superblock": true, 00:29:03.176 "num_base_bdevs": 2, 00:29:03.176 "num_base_bdevs_discovered": 2, 00:29:03.176 "num_base_bdevs_operational": 2, 00:29:03.176 "process": { 00:29:03.176 "type": "rebuild", 00:29:03.176 "target": "spare", 00:29:03.176 "progress": { 00:29:03.176 "blocks": 20480, 00:29:03.176 "percent": 32 00:29:03.176 } 00:29:03.176 }, 00:29:03.176 "base_bdevs_list": [ 00:29:03.176 { 00:29:03.176 "name": "spare", 00:29:03.176 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:03.176 "is_configured": true, 00:29:03.176 "data_offset": 2048, 00:29:03.176 "data_size": 63488 00:29:03.176 }, 00:29:03.176 { 00:29:03.176 "name": "BaseBdev2", 00:29:03.176 "uuid": 
"c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:03.176 "is_configured": true, 00:29:03.176 "data_offset": 2048, 00:29:03.176 "data_size": 63488 00:29:03.176 } 00:29:03.176 ] 00:29:03.176 }' 00:29:03.176 14:21:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:03.176 14:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:03.176 14:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:03.176 14:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.176 14:21:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:03.176 [2024-07-15 14:21:49.105834] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:03.434 [2024-07-15 14:21:49.347988] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:29:03.694 [2024-07-15 14:21:49.561714] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:03.694 [2024-07-15 14:21:49.562437] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:04.262 [2024-07-15 14:21:50.014778] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:04.262 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.520 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:04.520 "name": "raid_bdev1", 00:29:04.520 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:04.520 "strip_size_kb": 0, 00:29:04.520 "state": "online", 00:29:04.520 "raid_level": "raid1", 00:29:04.520 "superblock": true, 00:29:04.520 "num_base_bdevs": 2, 00:29:04.520 "num_base_bdevs_discovered": 2, 00:29:04.520 "num_base_bdevs_operational": 2, 00:29:04.520 "process": { 00:29:04.520 "type": "rebuild", 00:29:04.520 "target": "spare", 00:29:04.520 "progress": { 00:29:04.520 "blocks": 36864, 00:29:04.520 "percent": 58 00:29:04.520 } 00:29:04.520 }, 00:29:04.520 "base_bdevs_list": [ 00:29:04.520 { 00:29:04.520 "name": "spare", 00:29:04.520 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:04.520 "is_configured": true, 00:29:04.520 "data_offset": 2048, 00:29:04.520 "data_size": 63488 00:29:04.520 }, 00:29:04.520 { 00:29:04.520 "name": "BaseBdev2", 00:29:04.520 
"uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:04.520 "is_configured": true, 00:29:04.520 "data_offset": 2048, 00:29:04.520 "data_size": 63488 00:29:04.520 } 00:29:04.520 ] 00:29:04.520 }' 00:29:04.520 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:04.520 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:04.520 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:04.520 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:04.520 14:21:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:04.779 [2024-07-15 14:21:50.682798] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:29:05.038 [2024-07-15 14:21:50.889900] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:05.038 [2024-07-15 14:21:50.890682] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:29:05.605 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:05.606 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:05.606 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:05.606 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:05.606 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:05.606 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:05.606 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.606 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.606 [2024-07-15 14:21:51.547685] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:29:05.923 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:05.923 "name": "raid_bdev1", 00:29:05.923 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:05.923 "strip_size_kb": 0, 00:29:05.923 "state": "online", 00:29:05.923 "raid_level": "raid1", 00:29:05.923 "superblock": true, 00:29:05.923 "num_base_bdevs": 2, 00:29:05.923 "num_base_bdevs_discovered": 2, 00:29:05.923 "num_base_bdevs_operational": 2, 00:29:05.923 "process": { 00:29:05.923 "type": "rebuild", 00:29:05.923 "target": "spare", 00:29:05.923 "progress": { 00:29:05.923 "blocks": 59392, 00:29:05.923 "percent": 93 00:29:05.923 } 00:29:05.923 }, 00:29:05.923 "base_bdevs_list": [ 00:29:05.923 { 00:29:05.923 "name": "spare", 00:29:05.923 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:05.923 "is_configured": true, 00:29:05.923 "data_offset": 2048, 00:29:05.923 "data_size": 63488 00:29:05.923 }, 00:29:05.923 { 00:29:05.923 "name": "BaseBdev2", 00:29:05.923 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:05.923 "is_configured": true, 00:29:05.923 "data_offset": 2048, 00:29:05.923 "data_size": 63488 
00:29:05.923 } 00:29:05.923 ] 00:29:05.923 }' 00:29:05.923 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:05.923 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:05.923 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:05.923 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:05.923 14:21:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:05.923 [2024-07-15 14:21:51.865869] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:06.181 [2024-07-15 14:21:51.970382] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:06.181 [2024-07-15 14:21:51.973805] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.116 14:21:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:07.376 "name": "raid_bdev1", 00:29:07.376 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:07.376 "strip_size_kb": 0, 00:29:07.376 "state": "online", 00:29:07.376 "raid_level": "raid1", 00:29:07.376 "superblock": true, 00:29:07.376 "num_base_bdevs": 2, 00:29:07.376 "num_base_bdevs_discovered": 2, 00:29:07.376 "num_base_bdevs_operational": 2, 00:29:07.376 "base_bdevs_list": [ 00:29:07.376 { 00:29:07.376 "name": "spare", 00:29:07.376 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:07.376 "is_configured": true, 00:29:07.376 "data_offset": 2048, 00:29:07.376 "data_size": 63488 00:29:07.376 }, 00:29:07.376 { 00:29:07.376 "name": "BaseBdev2", 00:29:07.376 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:07.376 "is_configured": true, 00:29:07.376 "data_offset": 2048, 00:29:07.376 "data_size": 63488 00:29:07.376 } 00:29:07.376 ] 00:29:07.376 }' 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:29:07.376 14:21:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.376 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:07.635 "name": "raid_bdev1", 00:29:07.635 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:07.635 "strip_size_kb": 0, 00:29:07.635 "state": "online", 00:29:07.635 "raid_level": "raid1", 00:29:07.635 "superblock": true, 00:29:07.635 "num_base_bdevs": 2, 00:29:07.635 "num_base_bdevs_discovered": 2, 00:29:07.635 "num_base_bdevs_operational": 2, 00:29:07.635 "base_bdevs_list": [ 00:29:07.635 { 00:29:07.635 "name": "spare", 00:29:07.635 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:07.635 "is_configured": true, 00:29:07.635 "data_offset": 2048, 00:29:07.635 "data_size": 63488 00:29:07.635 }, 00:29:07.635 { 00:29:07.635 "name": "BaseBdev2", 00:29:07.635 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:07.635 "is_configured": true, 00:29:07.635 "data_offset": 2048, 00:29:07.635 "data_size": 63488 00:29:07.635 } 00:29:07.635 ] 00:29:07.635 }' 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.635 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.894 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:07.894 "name": "raid_bdev1", 00:29:07.894 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:07.894 "strip_size_kb": 0, 00:29:07.894 "state": "online", 00:29:07.894 "raid_level": "raid1", 00:29:07.894 "superblock": true, 00:29:07.894 "num_base_bdevs": 2, 00:29:07.894 "num_base_bdevs_discovered": 2, 00:29:07.894 "num_base_bdevs_operational": 2, 00:29:07.894 "base_bdevs_list": [ 00:29:07.894 { 00:29:07.894 "name": "spare", 00:29:07.894 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:07.894 "is_configured": true, 00:29:07.894 "data_offset": 2048, 00:29:07.894 "data_size": 63488 00:29:07.894 }, 00:29:07.894 { 00:29:07.894 "name": "BaseBdev2", 00:29:07.894 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:07.894 "is_configured": true, 00:29:07.894 "data_offset": 2048, 00:29:07.894 "data_size": 63488 00:29:07.894 } 00:29:07.894 ] 00:29:07.894 }' 00:29:07.894 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:07.894 14:21:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:08.830 14:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:08.830 [2024-07-15 14:21:54.704634] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:08.830 [2024-07-15 14:21:54.705002] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:08.830 00:29:08.830 Latency(us) 00:29:08.830 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.830 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:08.830 raid_bdev1 : 12.09 143.65 430.95 0.00 0.00 10012.19 303.48 110577.11 00:29:08.830 =================================================================================================================== 00:29:08.830 Total : 143.65 430.95 0.00 0.00 10012.19 303.48 110577.11 00:29:08.830 [2024-07-15 14:21:54.767118] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:08.830 [2024-07-15 14:21:54.767334] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:08.830 0 00:29:08.830 [2024-07-15 14:21:54.767469] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:08.830 [2024-07-15 14:21:54.767487] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:29:08.830 14:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.830 14:21:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:09.089 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:09.089 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:09.089 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:09.089 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:09.089 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:09.089 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:09.089 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:09.090 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:09.090 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:09.090 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:09.090 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:09.090 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:09.090 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:09.349 /dev/nbd0 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:09.349 1+0 records in 00:29:09.349 1+0 records out 00:29:09.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545778 s, 7.5 MB/s 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:29:09.349 14:21:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:09.349 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:29:09.915 /dev/nbd1 00:29:09.915 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:09.915 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:09.915 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:09.915 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:29:09.915 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:09.916 1+0 records in 00:29:09.916 1+0 records out 00:29:09.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050323 s, 8.1 MB/s 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:09.916 14:21:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:09.916 14:21:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:10.175 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:10.743 14:21:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:11.005 [2024-07-15 14:21:56.997174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:11.005 [2024-07-15 14:21:56.998024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.005 [2024-07-15 14:21:56.998319] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:11.005 [2024-07-15 14:21:56.998535] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.005 [2024-07-15 14:21:57.000717] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.005 [2024-07-15 14:21:57.000996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:11.005 [2024-07-15 14:21:57.001367] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:11.005 [2024-07-15 14:21:57.001539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:11.005 [2024-07-15 14:21:57.001858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:11.005 spare 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.263 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.263 [2024-07-15 14:21:57.102083] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:29:11.263 [2024-07-15 14:21:57.103995] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:11.263 [2024-07-15 14:21:57.104296] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b340 00:29:11.263 [2024-07-15 14:21:57.104977] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:29:11.263 [2024-07-15 14:21:57.105131] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:29:11.263 
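For reference, the repeated raid_bdev1 JSON dumps in this trace come from the verify_raid_bdev_process/verify_raid_bdev_state helpers in test/bdev/bdev_raid.sh, which poll the test application over its RPC socket and filter the reply with jq. A minimal sketch of the same check, reusing only the commands and paths already visible in this run and assuming the SPDK app is still listening on /var/tmp/spdk-raid.sock:

    # List every raid bdev the app knows about, then keep only raid_bdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'

    # While a rebuild is running, its type/target/progress appear under .process;
    # once the process entry is gone this prints "none", which is what the test waits for
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"'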
[2024-07-15 14:21:57.105405] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:11.521 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.521 "name": "raid_bdev1", 00:29:11.521 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:11.521 "strip_size_kb": 0, 00:29:11.521 "state": "online", 00:29:11.521 "raid_level": "raid1", 00:29:11.521 "superblock": true, 00:29:11.521 "num_base_bdevs": 2, 00:29:11.521 "num_base_bdevs_discovered": 2, 00:29:11.521 "num_base_bdevs_operational": 2, 00:29:11.521 "base_bdevs_list": [ 00:29:11.521 { 00:29:11.521 "name": "spare", 00:29:11.521 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:11.521 "is_configured": true, 00:29:11.521 "data_offset": 2048, 00:29:11.521 "data_size": 63488 00:29:11.521 }, 00:29:11.521 { 00:29:11.521 "name": "BaseBdev2", 00:29:11.521 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:11.521 "is_configured": true, 00:29:11.521 "data_offset": 2048, 00:29:11.521 "data_size": 63488 00:29:11.521 } 00:29:11.521 ] 00:29:11.521 }' 00:29:11.521 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.521 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:12.089 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:12.089 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.089 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:12.089 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:12.089 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.089 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.089 14:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.348 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.348 "name": "raid_bdev1", 00:29:12.348 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:12.348 "strip_size_kb": 0, 00:29:12.348 "state": "online", 00:29:12.348 "raid_level": "raid1", 00:29:12.348 "superblock": true, 00:29:12.348 "num_base_bdevs": 2, 00:29:12.348 "num_base_bdevs_discovered": 2, 00:29:12.348 "num_base_bdevs_operational": 2, 00:29:12.348 "base_bdevs_list": [ 00:29:12.348 { 00:29:12.348 "name": "spare", 00:29:12.348 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:12.348 "is_configured": true, 00:29:12.348 "data_offset": 2048, 00:29:12.348 "data_size": 63488 00:29:12.348 }, 00:29:12.348 { 00:29:12.348 "name": "BaseBdev2", 00:29:12.348 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:12.348 "is_configured": true, 00:29:12.348 "data_offset": 2048, 00:29:12.348 "data_size": 63488 00:29:12.348 } 00:29:12.348 ] 00:29:12.348 }' 00:29:12.348 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.348 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:12.348 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.348 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:12.348 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.348 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:12.606 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:12.606 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:12.910 [2024-07-15 14:21:58.831032] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.910 14:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:13.199 14:21:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:13.199 "name": "raid_bdev1", 00:29:13.199 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:13.199 "strip_size_kb": 0, 00:29:13.199 "state": "online", 00:29:13.199 "raid_level": "raid1", 00:29:13.199 "superblock": true, 00:29:13.199 "num_base_bdevs": 2, 00:29:13.199 "num_base_bdevs_discovered": 1, 00:29:13.199 "num_base_bdevs_operational": 1, 00:29:13.199 "base_bdevs_list": [ 00:29:13.199 { 00:29:13.199 "name": null, 00:29:13.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.199 "is_configured": false, 00:29:13.199 "data_offset": 2048, 00:29:13.199 "data_size": 63488 00:29:13.199 }, 00:29:13.199 { 00:29:13.199 "name": "BaseBdev2", 00:29:13.199 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:13.199 "is_configured": true, 00:29:13.199 "data_offset": 2048, 00:29:13.199 "data_size": 63488 00:29:13.199 } 00:29:13.199 ] 00:29:13.199 }' 00:29:13.199 14:21:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:13.199 14:21:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.763 14:21:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_add_base_bdev raid_bdev1 spare 00:29:14.329 [2024-07-15 14:22:00.043128] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:14.329 [2024-07-15 14:22:00.043584] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:14.329 [2024-07-15 14:22:00.043760] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:14.329 [2024-07-15 14:22:00.044369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:14.329 [2024-07-15 14:22:00.059896] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:29:14.329 [2024-07-15 14:22:00.074425] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:14.329 14:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:15.262 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:15.262 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:15.262 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:15.262 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:15.262 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:15.262 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:15.262 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:15.519 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:15.519 "name": "raid_bdev1", 00:29:15.519 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:15.519 "strip_size_kb": 0, 00:29:15.519 "state": "online", 00:29:15.519 "raid_level": "raid1", 00:29:15.519 "superblock": true, 00:29:15.519 "num_base_bdevs": 2, 00:29:15.519 "num_base_bdevs_discovered": 2, 00:29:15.519 "num_base_bdevs_operational": 2, 00:29:15.519 "process": { 00:29:15.519 "type": "rebuild", 00:29:15.519 "target": "spare", 00:29:15.519 "progress": { 00:29:15.519 "blocks": 24576, 00:29:15.519 "percent": 38 00:29:15.519 } 00:29:15.519 }, 00:29:15.519 "base_bdevs_list": [ 00:29:15.519 { 00:29:15.519 "name": "spare", 00:29:15.519 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:15.519 "is_configured": true, 00:29:15.519 "data_offset": 2048, 00:29:15.519 "data_size": 63488 00:29:15.519 }, 00:29:15.519 { 00:29:15.519 "name": "BaseBdev2", 00:29:15.519 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:15.519 "is_configured": true, 00:29:15.519 "data_offset": 2048, 00:29:15.519 "data_size": 63488 00:29:15.519 } 00:29:15.519 ] 00:29:15.519 }' 00:29:15.519 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:15.519 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:15.519 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:15.519 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:15.519 14:22:01 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:15.776 [2024-07-15 14:22:01.721394] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:16.033 [2024-07-15 14:22:01.787635] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:16.033 [2024-07-15 14:22:01.788562] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.033 [2024-07-15 14:22:01.788747] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:16.033 [2024-07-15 14:22:01.788803] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:16.033 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:16.033 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.034 14:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:16.292 14:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:16.292 "name": "raid_bdev1", 00:29:16.292 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:16.292 "strip_size_kb": 0, 00:29:16.292 "state": "online", 00:29:16.292 "raid_level": "raid1", 00:29:16.292 "superblock": true, 00:29:16.292 "num_base_bdevs": 2, 00:29:16.292 "num_base_bdevs_discovered": 1, 00:29:16.292 "num_base_bdevs_operational": 1, 00:29:16.292 "base_bdevs_list": [ 00:29:16.292 { 00:29:16.292 "name": null, 00:29:16.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.292 "is_configured": false, 00:29:16.292 "data_offset": 2048, 00:29:16.292 "data_size": 63488 00:29:16.292 }, 00:29:16.292 { 00:29:16.292 "name": "BaseBdev2", 00:29:16.292 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:16.292 "is_configured": true, 00:29:16.292 "data_offset": 2048, 00:29:16.292 "data_size": 63488 00:29:16.292 } 00:29:16.292 ] 00:29:16.292 }' 00:29:16.292 14:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:16.292 14:22:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:16.860 14:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
spare_delay -p spare 00:29:17.119 [2024-07-15 14:22:03.076158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:17.119 [2024-07-15 14:22:03.077022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:17.119 [2024-07-15 14:22:03.077290] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:17.119 [2024-07-15 14:22:03.077529] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:17.119 [2024-07-15 14:22:03.078226] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:17.119 [2024-07-15 14:22:03.078460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:17.119 [2024-07-15 14:22:03.078776] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:17.119 [2024-07-15 14:22:03.078912] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:29:17.119 [2024-07-15 14:22:03.079034] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:17.119 [2024-07-15 14:22:03.079183] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:17.119 [2024-07-15 14:22:03.094781] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:29:17.119 spare 00:29:17.119 [2024-07-15 14:22:03.096650] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:17.119 14:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:18.495 "name": "raid_bdev1", 00:29:18.495 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:18.495 "strip_size_kb": 0, 00:29:18.495 "state": "online", 00:29:18.495 "raid_level": "raid1", 00:29:18.495 "superblock": true, 00:29:18.495 "num_base_bdevs": 2, 00:29:18.495 "num_base_bdevs_discovered": 2, 00:29:18.495 "num_base_bdevs_operational": 2, 00:29:18.495 "process": { 00:29:18.495 "type": "rebuild", 00:29:18.495 "target": "spare", 00:29:18.495 "progress": { 00:29:18.495 "blocks": 24576, 00:29:18.495 "percent": 38 00:29:18.495 } 00:29:18.495 }, 00:29:18.495 "base_bdevs_list": [ 00:29:18.495 { 00:29:18.495 "name": "spare", 00:29:18.495 "uuid": "c7f1b185-d35e-5be8-98c0-ee7019d1db6c", 00:29:18.495 "is_configured": true, 00:29:18.495 "data_offset": 2048, 00:29:18.495 "data_size": 63488 00:29:18.495 }, 00:29:18.495 { 00:29:18.495 "name": 
"BaseBdev2", 00:29:18.495 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:18.495 "is_configured": true, 00:29:18.495 "data_offset": 2048, 00:29:18.495 "data_size": 63488 00:29:18.495 } 00:29:18.495 ] 00:29:18.495 }' 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:18.495 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:18.496 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:18.496 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:18.496 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:18.755 [2024-07-15 14:22:04.748160] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:19.013 [2024-07-15 14:22:04.809375] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:19.013 [2024-07-15 14:22:04.810243] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:19.013 [2024-07-15 14:22:04.810393] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:19.013 [2024-07-15 14:22:04.810460] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.013 14:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:19.272 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:19.272 "name": "raid_bdev1", 00:29:19.272 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:19.272 "strip_size_kb": 0, 00:29:19.272 "state": "online", 00:29:19.272 "raid_level": "raid1", 00:29:19.272 "superblock": true, 00:29:19.272 "num_base_bdevs": 2, 00:29:19.272 "num_base_bdevs_discovered": 1, 00:29:19.272 "num_base_bdevs_operational": 1, 00:29:19.272 "base_bdevs_list": [ 00:29:19.272 { 00:29:19.272 "name": null, 00:29:19.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.272 
"is_configured": false, 00:29:19.272 "data_offset": 2048, 00:29:19.272 "data_size": 63488 00:29:19.272 }, 00:29:19.272 { 00:29:19.272 "name": "BaseBdev2", 00:29:19.272 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:19.272 "is_configured": true, 00:29:19.272 "data_offset": 2048, 00:29:19.272 "data_size": 63488 00:29:19.272 } 00:29:19.272 ] 00:29:19.272 }' 00:29:19.272 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:19.272 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.838 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:19.838 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:19.838 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:19.838 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:19.838 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:19.838 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:19.838 14:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.404 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:20.404 "name": "raid_bdev1", 00:29:20.404 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:20.404 "strip_size_kb": 0, 00:29:20.404 "state": "online", 00:29:20.404 "raid_level": "raid1", 00:29:20.404 "superblock": true, 00:29:20.404 "num_base_bdevs": 2, 00:29:20.404 "num_base_bdevs_discovered": 1, 00:29:20.404 "num_base_bdevs_operational": 1, 00:29:20.404 "base_bdevs_list": [ 00:29:20.404 { 00:29:20.404 "name": null, 00:29:20.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.404 "is_configured": false, 00:29:20.404 "data_offset": 2048, 00:29:20.404 "data_size": 63488 00:29:20.404 }, 00:29:20.404 { 00:29:20.404 "name": "BaseBdev2", 00:29:20.404 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:20.404 "is_configured": true, 00:29:20.404 "data_offset": 2048, 00:29:20.404 "data_size": 63488 00:29:20.404 } 00:29:20.404 ] 00:29:20.404 }' 00:29:20.404 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:20.404 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:20.404 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:20.404 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:20.404 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:20.662 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:20.919 [2024-07-15 14:22:06.844084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:20.919 [2024-07-15 14:22:06.844700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:20.920 [2024-07-15 
14:22:06.844958] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:20.920 [2024-07-15 14:22:06.845206] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:20.920 [2024-07-15 14:22:06.845786] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:20.920 [2024-07-15 14:22:06.846017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:20.920 [2024-07-15 14:22:06.846333] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:20.920 [2024-07-15 14:22:06.846461] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:20.920 [2024-07-15 14:22:06.846568] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:20.920 BaseBdev1 00:29:20.920 14:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.294 14:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.294 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:22.294 "name": "raid_bdev1", 00:29:22.294 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:22.294 "strip_size_kb": 0, 00:29:22.294 "state": "online", 00:29:22.294 "raid_level": "raid1", 00:29:22.294 "superblock": true, 00:29:22.294 "num_base_bdevs": 2, 00:29:22.294 "num_base_bdevs_discovered": 1, 00:29:22.294 "num_base_bdevs_operational": 1, 00:29:22.294 "base_bdevs_list": [ 00:29:22.294 { 00:29:22.294 "name": null, 00:29:22.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.294 "is_configured": false, 00:29:22.294 "data_offset": 2048, 00:29:22.294 "data_size": 63488 00:29:22.294 }, 00:29:22.294 { 00:29:22.294 "name": "BaseBdev2", 00:29:22.294 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:22.294 "is_configured": true, 00:29:22.294 "data_offset": 2048, 00:29:22.294 "data_size": 63488 00:29:22.294 } 00:29:22.294 ] 00:29:22.294 }' 00:29:22.294 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:29:22.294 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:22.859 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:22.859 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:22.859 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:22.859 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:22.859 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:22.859 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:22.859 14:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.117 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:23.117 "name": "raid_bdev1", 00:29:23.117 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:23.117 "strip_size_kb": 0, 00:29:23.117 "state": "online", 00:29:23.117 "raid_level": "raid1", 00:29:23.117 "superblock": true, 00:29:23.117 "num_base_bdevs": 2, 00:29:23.117 "num_base_bdevs_discovered": 1, 00:29:23.117 "num_base_bdevs_operational": 1, 00:29:23.117 "base_bdevs_list": [ 00:29:23.117 { 00:29:23.117 "name": null, 00:29:23.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.117 "is_configured": false, 00:29:23.117 "data_offset": 2048, 00:29:23.117 "data_size": 63488 00:29:23.117 }, 00:29:23.117 { 00:29:23.117 "name": "BaseBdev2", 00:29:23.117 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:23.117 "is_configured": true, 00:29:23.117 "data_offset": 2048, 00:29:23.117 "data_size": 63488 00:29:23.117 } 00:29:23.117 ] 00:29:23.117 }' 00:29:23.117 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:23.375 14:22:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:23.375 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:23.634 [2024-07-15 14:22:09.450792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:23.634 [2024-07-15 14:22:09.451147] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:29:23.634 [2024-07-15 14:22:09.451277] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:23.634 request: 00:29:23.634 { 00:29:23.634 "base_bdev": "BaseBdev1", 00:29:23.634 "raid_bdev": "raid_bdev1", 00:29:23.634 "method": "bdev_raid_add_base_bdev", 00:29:23.634 "req_id": 1 00:29:23.634 } 00:29:23.634 Got JSON-RPC error response 00:29:23.634 response: 00:29:23.634 { 00:29:23.634 "code": -22, 00:29:23.634 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:23.634 } 00:29:23.634 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:29:23.634 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:23.634 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:23.634 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:23.634 14:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.574 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:24.833 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:24.833 "name": "raid_bdev1", 00:29:24.833 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:24.833 "strip_size_kb": 0, 00:29:24.833 "state": "online", 00:29:24.833 "raid_level": "raid1", 00:29:24.833 "superblock": true, 00:29:24.833 "num_base_bdevs": 2, 00:29:24.833 "num_base_bdevs_discovered": 1, 00:29:24.833 "num_base_bdevs_operational": 1, 00:29:24.833 "base_bdevs_list": [ 00:29:24.833 { 00:29:24.833 "name": null, 00:29:24.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.833 "is_configured": false, 00:29:24.833 "data_offset": 2048, 00:29:24.833 "data_size": 63488 00:29:24.833 }, 00:29:24.833 { 00:29:24.833 "name": "BaseBdev2", 00:29:24.833 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:24.833 "is_configured": true, 00:29:24.833 "data_offset": 2048, 00:29:24.833 "data_size": 63488 00:29:24.833 } 00:29:24.833 ] 00:29:24.833 }' 00:29:24.833 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:24.833 14:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:25.768 "name": "raid_bdev1", 00:29:25.768 "uuid": "096ce165-d939-457d-aebb-4216094b197d", 00:29:25.768 "strip_size_kb": 0, 00:29:25.768 "state": "online", 00:29:25.768 "raid_level": "raid1", 00:29:25.768 "superblock": true, 00:29:25.768 "num_base_bdevs": 2, 00:29:25.768 "num_base_bdevs_discovered": 1, 00:29:25.768 "num_base_bdevs_operational": 1, 00:29:25.768 "base_bdevs_list": [ 00:29:25.768 { 00:29:25.768 "name": null, 00:29:25.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.768 "is_configured": false, 00:29:25.768 "data_offset": 2048, 00:29:25.768 "data_size": 63488 00:29:25.768 }, 00:29:25.768 { 00:29:25.768 "name": "BaseBdev2", 00:29:25.768 "uuid": "c0a617a5-1cf8-59ad-9a8d-060fbd103cf8", 00:29:25.768 "is_configured": true, 00:29:25.768 "data_offset": 2048, 00:29:25.768 "data_size": 63488 00:29:25.768 } 00:29:25.768 ] 00:29:25.768 }' 00:29:25.768 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:26.027 14:22:11 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 212333 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 212333 ']' 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 212333 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 212333 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:26.027 killing process with pid 212333 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 212333' 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 212333 00:29:26.027 Received shutdown signal, test time was about 29.206737 seconds 00:29:26.027 00:29:26.027 Latency(us) 00:29:26.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.027 =================================================================================================================== 00:29:26.027 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:26.027 [2024-07-15 14:22:11.863434] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:26.027 [2024-07-15 14:22:11.863536] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:26.027 [2024-07-15 14:22:11.863577] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:26.027 [2024-07-15 14:22:11.863588] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:29:26.027 14:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 212333 00:29:26.286 [2024-07-15 14:22:12.061955] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:27.664 00:29:27.664 real 0m35.315s 00:29:27.664 user 0m56.140s 00:29:27.664 sys 0m3.598s 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:27.664 ************************************ 00:29:27.664 END TEST raid_rebuild_test_sb_io 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:27.664 ************************************ 00:29:27.664 14:22:13 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:27.664 14:22:13 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:29:27.664 14:22:13 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:29:27.664 14:22:13 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:27.664 14:22:13 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:27.664 14:22:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:27.664 ************************************ 00:29:27.664 START TEST raid_rebuild_test 00:29:27.664 ************************************ 00:29:27.664 14:22:13 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false false true 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=213217 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 213217 /var/tmp/spdk-raid.sock 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@829 -- # '[' -z 213217 ']' 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:27.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:27.664 14:22:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:27.664 [2024-07-15 14:22:13.402580] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:29:27.664 [2024-07-15 14:22:13.403402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213217 ] 00:29:27.664 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:27.664 Zero copy mechanism will not be used. 00:29:27.664 [2024-07-15 14:22:13.574776] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.922 [2024-07-15 14:22:13.790996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.180 [2024-07-15 14:22:13.992389] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:28.439 14:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:28.439 14:22:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # return 0 00:29:28.439 14:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:28.439 14:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:29.007 BaseBdev1_malloc 00:29:29.007 14:22:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:29.007 [2024-07-15 14:22:14.997853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:29.007 [2024-07-15 14:22:14.997986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:29.007 [2024-07-15 14:22:14.998050] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:29.007 [2024-07-15 14:22:14.998090] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:29.007 [2024-07-15 14:22:15.000061] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:29.007 [2024-07-15 14:22:15.000133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:29.007 BaseBdev1 00:29:29.266 14:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:29.266 14:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:29.525 BaseBdev2_malloc 00:29:29.525 14:22:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:29.525 [2024-07-15 14:22:15.516335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:29.525 [2024-07-15 14:22:15.516471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:29.525 [2024-07-15 14:22:15.516532] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:29.525 [2024-07-15 14:22:15.516569] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:29.525 [2024-07-15 14:22:15.518495] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:29.525 [2024-07-15 14:22:15.518553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:29.525 BaseBdev2 00:29:29.785 14:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:29.785 14:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:30.042 BaseBdev3_malloc 00:29:30.042 14:22:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:30.042 [2024-07-15 14:22:16.045074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:30.042 [2024-07-15 14:22:16.045198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.042 [2024-07-15 14:22:16.045251] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:30.042 [2024-07-15 14:22:16.045292] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.300 [2024-07-15 14:22:16.047212] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.300 [2024-07-15 14:22:16.047273] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:30.300 BaseBdev3 00:29:30.300 14:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:30.300 14:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:30.574 BaseBdev4_malloc 00:29:30.574 14:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:30.832 [2024-07-15 14:22:16.652052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:30.832 [2024-07-15 14:22:16.652187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:30.832 [2024-07-15 14:22:16.652241] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:30.832 [2024-07-15 14:22:16.652282] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:30.832 [2024-07-15 14:22:16.654172] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:30.832 [2024-07-15 14:22:16.654232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:30.832 BaseBdev4 
00:29:30.832 14:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:31.089 spare_malloc 00:29:31.089 14:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:31.347 spare_delay 00:29:31.347 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:31.657 [2024-07-15 14:22:17.431174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:31.657 [2024-07-15 14:22:17.431321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:31.657 [2024-07-15 14:22:17.431382] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:31.657 [2024-07-15 14:22:17.431460] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:31.657 [2024-07-15 14:22:17.433574] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:31.657 [2024-07-15 14:22:17.433636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:31.657 spare 00:29:31.657 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:31.915 [2024-07-15 14:22:17.667268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:31.915 [2024-07-15 14:22:17.668883] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:31.915 [2024-07-15 14:22:17.668980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:31.915 [2024-07-15 14:22:17.669024] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:31.916 [2024-07-15 14:22:17.669114] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:29:31.916 [2024-07-15 14:22:17.669128] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:31.916 [2024-07-15 14:22:17.669260] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:31.916 [2024-07-15 14:22:17.669533] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:29:31.916 [2024-07-15 14:22:17.669562] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:29:31.916 [2024-07-15 14:22:17.669709] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:31.916 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:32.174 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:32.174 "name": "raid_bdev1", 00:29:32.174 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:32.174 "strip_size_kb": 0, 00:29:32.174 "state": "online", 00:29:32.174 "raid_level": "raid1", 00:29:32.174 "superblock": false, 00:29:32.174 "num_base_bdevs": 4, 00:29:32.174 "num_base_bdevs_discovered": 4, 00:29:32.174 "num_base_bdevs_operational": 4, 00:29:32.174 "base_bdevs_list": [ 00:29:32.174 { 00:29:32.174 "name": "BaseBdev1", 00:29:32.174 "uuid": "904a3834-3db8-57b7-9061-2224489f239c", 00:29:32.174 "is_configured": true, 00:29:32.174 "data_offset": 0, 00:29:32.174 "data_size": 65536 00:29:32.174 }, 00:29:32.174 { 00:29:32.174 "name": "BaseBdev2", 00:29:32.174 "uuid": "e999b6dd-cb66-5382-aa57-557fe467cb14", 00:29:32.174 "is_configured": true, 00:29:32.174 "data_offset": 0, 00:29:32.174 "data_size": 65536 00:29:32.174 }, 00:29:32.174 { 00:29:32.174 "name": "BaseBdev3", 00:29:32.174 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:32.174 "is_configured": true, 00:29:32.174 "data_offset": 0, 00:29:32.174 "data_size": 65536 00:29:32.174 }, 00:29:32.174 { 00:29:32.174 "name": "BaseBdev4", 00:29:32.174 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:32.174 "is_configured": true, 00:29:32.174 "data_offset": 0, 00:29:32.174 "data_size": 65536 00:29:32.174 } 00:29:32.174 ] 00:29:32.174 }' 00:29:32.174 14:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:32.174 14:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:32.740 14:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:32.740 14:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:32.999 [2024-07-15 14:22:18.799588] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:32.999 14:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:32.999 14:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:32.999 14:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # 
local write_unit_size 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:33.257 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:33.515 [2024-07-15 14:22:19.287545] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:33.515 /dev/nbd0 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:33.515 1+0 records in 00:29:33.515 1+0 records out 00:29:33.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374614 s, 10.9 MB/s 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:33.515 14:22:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:29:38.782 65536+0 records in 00:29:38.782 65536+0 records out 00:29:38.782 33554432 bytes (34 MB, 32 MiB) copied, 4.53634 s, 7.4 MB/s 00:29:38.782 14:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:38.782 14:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:38.782 14:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:38.782 14:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:38.782 14:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:38.782 14:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:38.782 14:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:38.782 [2024-07-15 14:22:24.192397] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:38.782 [2024-07-15 14:22:24.428243] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:38.782 14:22:24 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:38.782 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:38.782 "name": "raid_bdev1", 00:29:38.782 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:38.782 "strip_size_kb": 0, 00:29:38.782 "state": "online", 00:29:38.782 "raid_level": "raid1", 00:29:38.782 "superblock": false, 00:29:38.782 "num_base_bdevs": 4, 00:29:38.782 "num_base_bdevs_discovered": 3, 00:29:38.782 "num_base_bdevs_operational": 3, 00:29:38.782 "base_bdevs_list": [ 00:29:38.782 { 00:29:38.782 "name": null, 00:29:38.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:38.782 "is_configured": false, 00:29:38.782 "data_offset": 0, 00:29:38.782 "data_size": 65536 00:29:38.782 }, 00:29:38.782 { 00:29:38.782 "name": "BaseBdev2", 00:29:38.782 "uuid": "e999b6dd-cb66-5382-aa57-557fe467cb14", 00:29:38.782 "is_configured": true, 00:29:38.782 "data_offset": 0, 00:29:38.782 "data_size": 65536 00:29:38.782 }, 00:29:38.782 { 00:29:38.782 "name": "BaseBdev3", 00:29:38.782 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:38.782 "is_configured": true, 00:29:38.782 "data_offset": 0, 00:29:38.783 "data_size": 65536 00:29:38.783 }, 00:29:38.783 { 00:29:38.783 "name": "BaseBdev4", 00:29:38.783 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:38.783 "is_configured": true, 00:29:38.783 "data_offset": 0, 00:29:38.783 "data_size": 65536 00:29:38.783 } 00:29:38.783 ] 00:29:38.783 }' 00:29:38.783 14:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:38.783 14:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.718 14:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:39.976 [2024-07-15 14:22:25.760396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:39.976 [2024-07-15 14:22:25.774215] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:29:39.976 [2024-07-15 14:22:25.776054] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:39.976 14:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:40.911 14:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:40.911 14:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:40.911 14:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:40.911 14:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:40.911 14:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:40.911 14:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.911 14:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.170 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.170 "name": "raid_bdev1", 00:29:41.170 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:41.170 "strip_size_kb": 0, 00:29:41.170 "state": "online", 00:29:41.170 "raid_level": 
"raid1", 00:29:41.170 "superblock": false, 00:29:41.170 "num_base_bdevs": 4, 00:29:41.170 "num_base_bdevs_discovered": 4, 00:29:41.170 "num_base_bdevs_operational": 4, 00:29:41.170 "process": { 00:29:41.170 "type": "rebuild", 00:29:41.170 "target": "spare", 00:29:41.170 "progress": { 00:29:41.170 "blocks": 24576, 00:29:41.170 "percent": 37 00:29:41.170 } 00:29:41.170 }, 00:29:41.170 "base_bdevs_list": [ 00:29:41.170 { 00:29:41.170 "name": "spare", 00:29:41.170 "uuid": "187f7f0f-f06f-5b32-b2e3-844c91a682d5", 00:29:41.170 "is_configured": true, 00:29:41.170 "data_offset": 0, 00:29:41.170 "data_size": 65536 00:29:41.170 }, 00:29:41.170 { 00:29:41.170 "name": "BaseBdev2", 00:29:41.170 "uuid": "e999b6dd-cb66-5382-aa57-557fe467cb14", 00:29:41.170 "is_configured": true, 00:29:41.170 "data_offset": 0, 00:29:41.170 "data_size": 65536 00:29:41.170 }, 00:29:41.170 { 00:29:41.170 "name": "BaseBdev3", 00:29:41.170 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:41.170 "is_configured": true, 00:29:41.170 "data_offset": 0, 00:29:41.170 "data_size": 65536 00:29:41.170 }, 00:29:41.170 { 00:29:41.170 "name": "BaseBdev4", 00:29:41.170 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:41.170 "is_configured": true, 00:29:41.170 "data_offset": 0, 00:29:41.170 "data_size": 65536 00:29:41.170 } 00:29:41.170 ] 00:29:41.170 }' 00:29:41.170 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.170 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:41.170 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.429 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:41.429 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:41.429 [2024-07-15 14:22:27.402642] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:41.688 [2024-07-15 14:22:27.488651] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:41.688 [2024-07-15 14:22:27.488783] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:41.688 [2024-07-15 14:22:27.488808] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:41.688 [2024-07-15 14:22:27.488819] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.688 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.946 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:41.946 "name": "raid_bdev1", 00:29:41.946 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:41.946 "strip_size_kb": 0, 00:29:41.946 "state": "online", 00:29:41.946 "raid_level": "raid1", 00:29:41.946 "superblock": false, 00:29:41.946 "num_base_bdevs": 4, 00:29:41.946 "num_base_bdevs_discovered": 3, 00:29:41.946 "num_base_bdevs_operational": 3, 00:29:41.946 "base_bdevs_list": [ 00:29:41.946 { 00:29:41.946 "name": null, 00:29:41.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.946 "is_configured": false, 00:29:41.946 "data_offset": 0, 00:29:41.946 "data_size": 65536 00:29:41.946 }, 00:29:41.946 { 00:29:41.946 "name": "BaseBdev2", 00:29:41.946 "uuid": "e999b6dd-cb66-5382-aa57-557fe467cb14", 00:29:41.946 "is_configured": true, 00:29:41.946 "data_offset": 0, 00:29:41.946 "data_size": 65536 00:29:41.946 }, 00:29:41.946 { 00:29:41.946 "name": "BaseBdev3", 00:29:41.946 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:41.946 "is_configured": true, 00:29:41.946 "data_offset": 0, 00:29:41.946 "data_size": 65536 00:29:41.946 }, 00:29:41.946 { 00:29:41.946 "name": "BaseBdev4", 00:29:41.946 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:41.946 "is_configured": true, 00:29:41.946 "data_offset": 0, 00:29:41.946 "data_size": 65536 00:29:41.946 } 00:29:41.946 ] 00:29:41.946 }' 00:29:41.946 14:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:41.946 14:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.512 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:42.512 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:42.512 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:42.512 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:42.512 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:42.512 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:42.512 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:42.771 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.771 "name": "raid_bdev1", 00:29:42.771 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:42.771 "strip_size_kb": 0, 00:29:42.771 "state": "online", 00:29:42.771 "raid_level": "raid1", 00:29:42.771 "superblock": false, 00:29:42.771 "num_base_bdevs": 4, 00:29:42.771 "num_base_bdevs_discovered": 3, 00:29:42.771 "num_base_bdevs_operational": 3, 00:29:42.771 "base_bdevs_list": [ 00:29:42.771 { 00:29:42.771 "name": null, 00:29:42.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:42.771 "is_configured": false, 00:29:42.771 "data_offset": 0, 00:29:42.771 "data_size": 65536 
00:29:42.771 }, 00:29:42.771 { 00:29:42.771 "name": "BaseBdev2", 00:29:42.771 "uuid": "e999b6dd-cb66-5382-aa57-557fe467cb14", 00:29:42.771 "is_configured": true, 00:29:42.771 "data_offset": 0, 00:29:42.771 "data_size": 65536 00:29:42.771 }, 00:29:42.771 { 00:29:42.771 "name": "BaseBdev3", 00:29:42.771 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:42.771 "is_configured": true, 00:29:42.771 "data_offset": 0, 00:29:42.771 "data_size": 65536 00:29:42.771 }, 00:29:42.771 { 00:29:42.771 "name": "BaseBdev4", 00:29:42.771 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:42.771 "is_configured": true, 00:29:42.771 "data_offset": 0, 00:29:42.771 "data_size": 65536 00:29:42.771 } 00:29:42.771 ] 00:29:42.771 }' 00:29:42.771 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:42.771 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:42.771 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.030 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:43.030 14:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:43.288 [2024-07-15 14:22:29.059284] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:43.288 [2024-07-15 14:22:29.071472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09fe0 00:29:43.288 [2024-07-15 14:22:29.073242] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:43.288 14:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:44.228 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.228 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.228 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.228 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.228 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.228 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.228 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.486 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.486 "name": "raid_bdev1", 00:29:44.486 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:44.486 "strip_size_kb": 0, 00:29:44.486 "state": "online", 00:29:44.486 "raid_level": "raid1", 00:29:44.486 "superblock": false, 00:29:44.486 "num_base_bdevs": 4, 00:29:44.487 "num_base_bdevs_discovered": 4, 00:29:44.487 "num_base_bdevs_operational": 4, 00:29:44.487 "process": { 00:29:44.487 "type": "rebuild", 00:29:44.487 "target": "spare", 00:29:44.487 "progress": { 00:29:44.487 "blocks": 24576, 00:29:44.487 "percent": 37 00:29:44.487 } 00:29:44.487 }, 00:29:44.487 "base_bdevs_list": [ 00:29:44.487 { 00:29:44.487 "name": "spare", 00:29:44.487 "uuid": "187f7f0f-f06f-5b32-b2e3-844c91a682d5", 00:29:44.487 "is_configured": true, 00:29:44.487 "data_offset": 0, 00:29:44.487 
"data_size": 65536 00:29:44.487 }, 00:29:44.487 { 00:29:44.487 "name": "BaseBdev2", 00:29:44.487 "uuid": "e999b6dd-cb66-5382-aa57-557fe467cb14", 00:29:44.487 "is_configured": true, 00:29:44.487 "data_offset": 0, 00:29:44.487 "data_size": 65536 00:29:44.487 }, 00:29:44.487 { 00:29:44.487 "name": "BaseBdev3", 00:29:44.487 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:44.487 "is_configured": true, 00:29:44.487 "data_offset": 0, 00:29:44.487 "data_size": 65536 00:29:44.487 }, 00:29:44.487 { 00:29:44.487 "name": "BaseBdev4", 00:29:44.487 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:44.487 "is_configured": true, 00:29:44.487 "data_offset": 0, 00:29:44.487 "data_size": 65536 00:29:44.487 } 00:29:44.487 ] 00:29:44.487 }' 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:44.487 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:44.745 [2024-07-15 14:22:30.679519] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:44.745 [2024-07-15 14:22:30.685513] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09fe0 00:29:44.745 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:44.745 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:44.746 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.746 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.746 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.746 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.746 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.746 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.746 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.004 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.004 "name": "raid_bdev1", 00:29:45.004 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:45.004 "strip_size_kb": 0, 00:29:45.004 "state": "online", 00:29:45.004 "raid_level": "raid1", 00:29:45.005 "superblock": false, 00:29:45.005 "num_base_bdevs": 4, 00:29:45.005 "num_base_bdevs_discovered": 3, 00:29:45.005 "num_base_bdevs_operational": 3, 00:29:45.005 
"process": { 00:29:45.005 "type": "rebuild", 00:29:45.005 "target": "spare", 00:29:45.005 "progress": { 00:29:45.005 "blocks": 36864, 00:29:45.005 "percent": 56 00:29:45.005 } 00:29:45.005 }, 00:29:45.005 "base_bdevs_list": [ 00:29:45.005 { 00:29:45.005 "name": "spare", 00:29:45.005 "uuid": "187f7f0f-f06f-5b32-b2e3-844c91a682d5", 00:29:45.005 "is_configured": true, 00:29:45.005 "data_offset": 0, 00:29:45.005 "data_size": 65536 00:29:45.005 }, 00:29:45.005 { 00:29:45.005 "name": null, 00:29:45.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.005 "is_configured": false, 00:29:45.005 "data_offset": 0, 00:29:45.005 "data_size": 65536 00:29:45.005 }, 00:29:45.005 { 00:29:45.005 "name": "BaseBdev3", 00:29:45.005 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:45.005 "is_configured": true, 00:29:45.005 "data_offset": 0, 00:29:45.005 "data_size": 65536 00:29:45.005 }, 00:29:45.005 { 00:29:45.005 "name": "BaseBdev4", 00:29:45.005 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:45.005 "is_configured": true, 00:29:45.005 "data_offset": 0, 00:29:45.005 "data_size": 65536 00:29:45.005 } 00:29:45.005 ] 00:29:45.005 }' 00:29:45.005 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:45.005 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:45.005 14:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1008 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.264 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.523 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.523 "name": "raid_bdev1", 00:29:45.523 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:45.523 "strip_size_kb": 0, 00:29:45.523 "state": "online", 00:29:45.523 "raid_level": "raid1", 00:29:45.523 "superblock": false, 00:29:45.523 "num_base_bdevs": 4, 00:29:45.523 "num_base_bdevs_discovered": 3, 00:29:45.523 "num_base_bdevs_operational": 3, 00:29:45.523 "process": { 00:29:45.523 "type": "rebuild", 00:29:45.523 "target": "spare", 00:29:45.523 "progress": { 00:29:45.523 "blocks": 43008, 00:29:45.523 "percent": 65 00:29:45.523 } 00:29:45.523 }, 00:29:45.523 "base_bdevs_list": [ 00:29:45.523 { 00:29:45.523 "name": "spare", 00:29:45.523 "uuid": "187f7f0f-f06f-5b32-b2e3-844c91a682d5", 00:29:45.523 "is_configured": true, 00:29:45.523 "data_offset": 0, 00:29:45.523 "data_size": 65536 00:29:45.523 }, 
00:29:45.523 { 00:29:45.523 "name": null, 00:29:45.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.523 "is_configured": false, 00:29:45.523 "data_offset": 0, 00:29:45.523 "data_size": 65536 00:29:45.523 }, 00:29:45.523 { 00:29:45.523 "name": "BaseBdev3", 00:29:45.523 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:45.523 "is_configured": true, 00:29:45.523 "data_offset": 0, 00:29:45.523 "data_size": 65536 00:29:45.523 }, 00:29:45.523 { 00:29:45.523 "name": "BaseBdev4", 00:29:45.523 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:45.523 "is_configured": true, 00:29:45.523 "data_offset": 0, 00:29:45.523 "data_size": 65536 00:29:45.523 } 00:29:45.523 ] 00:29:45.523 }' 00:29:45.523 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:45.523 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:45.523 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:45.524 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:45.524 14:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:46.460 [2024-07-15 14:22:32.297627] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:46.460 [2024-07-15 14:22:32.297778] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:46.460 [2024-07-15 14:22:32.298387] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.460 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:46.719 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:46.719 "name": "raid_bdev1", 00:29:46.719 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:46.719 "strip_size_kb": 0, 00:29:46.719 "state": "online", 00:29:46.719 "raid_level": "raid1", 00:29:46.719 "superblock": false, 00:29:46.719 "num_base_bdevs": 4, 00:29:46.719 "num_base_bdevs_discovered": 3, 00:29:46.719 "num_base_bdevs_operational": 3, 00:29:46.719 "base_bdevs_list": [ 00:29:46.719 { 00:29:46.719 "name": "spare", 00:29:46.719 "uuid": "187f7f0f-f06f-5b32-b2e3-844c91a682d5", 00:29:46.719 "is_configured": true, 00:29:46.719 "data_offset": 0, 00:29:46.719 "data_size": 65536 00:29:46.719 }, 00:29:46.719 { 00:29:46.719 "name": null, 00:29:46.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:46.719 "is_configured": false, 00:29:46.719 "data_offset": 0, 00:29:46.719 "data_size": 65536 00:29:46.719 }, 00:29:46.719 { 00:29:46.719 "name": "BaseBdev3", 00:29:46.719 
"uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:46.719 "is_configured": true, 00:29:46.719 "data_offset": 0, 00:29:46.719 "data_size": 65536 00:29:46.719 }, 00:29:46.719 { 00:29:46.719 "name": "BaseBdev4", 00:29:46.719 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:46.719 "is_configured": true, 00:29:46.719 "data_offset": 0, 00:29:46.719 "data_size": 65536 00:29:46.719 } 00:29:46.719 ] 00:29:46.719 }' 00:29:46.719 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:46.979 14:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.238 "name": "raid_bdev1", 00:29:47.238 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:47.238 "strip_size_kb": 0, 00:29:47.238 "state": "online", 00:29:47.238 "raid_level": "raid1", 00:29:47.238 "superblock": false, 00:29:47.238 "num_base_bdevs": 4, 00:29:47.238 "num_base_bdevs_discovered": 3, 00:29:47.238 "num_base_bdevs_operational": 3, 00:29:47.238 "base_bdevs_list": [ 00:29:47.238 { 00:29:47.238 "name": "spare", 00:29:47.238 "uuid": "187f7f0f-f06f-5b32-b2e3-844c91a682d5", 00:29:47.238 "is_configured": true, 00:29:47.238 "data_offset": 0, 00:29:47.238 "data_size": 65536 00:29:47.238 }, 00:29:47.238 { 00:29:47.238 "name": null, 00:29:47.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.238 "is_configured": false, 00:29:47.238 "data_offset": 0, 00:29:47.238 "data_size": 65536 00:29:47.238 }, 00:29:47.238 { 00:29:47.238 "name": "BaseBdev3", 00:29:47.238 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:47.238 "is_configured": true, 00:29:47.238 "data_offset": 0, 00:29:47.238 "data_size": 65536 00:29:47.238 }, 00:29:47.238 { 00:29:47.238 "name": "BaseBdev4", 00:29:47.238 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:47.238 "is_configured": true, 00:29:47.238 "data_offset": 0, 00:29:47.238 "data_size": 65536 00:29:47.238 } 00:29:47.238 ] 00:29:47.238 }' 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.238 14:22:33 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.238 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.497 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:47.497 "name": "raid_bdev1", 00:29:47.497 "uuid": "40f18699-5e3f-424e-829a-7fbafed1a4f4", 00:29:47.497 "strip_size_kb": 0, 00:29:47.497 "state": "online", 00:29:47.497 "raid_level": "raid1", 00:29:47.497 "superblock": false, 00:29:47.497 "num_base_bdevs": 4, 00:29:47.497 "num_base_bdevs_discovered": 3, 00:29:47.497 "num_base_bdevs_operational": 3, 00:29:47.497 "base_bdevs_list": [ 00:29:47.497 { 00:29:47.497 "name": "spare", 00:29:47.497 "uuid": "187f7f0f-f06f-5b32-b2e3-844c91a682d5", 00:29:47.497 "is_configured": true, 00:29:47.497 "data_offset": 0, 00:29:47.497 "data_size": 65536 00:29:47.497 }, 00:29:47.497 { 00:29:47.497 "name": null, 00:29:47.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.497 "is_configured": false, 00:29:47.497 "data_offset": 0, 00:29:47.497 "data_size": 65536 00:29:47.497 }, 00:29:47.497 { 00:29:47.497 "name": "BaseBdev3", 00:29:47.497 "uuid": "50452b76-60e3-57af-93cb-beaf82650657", 00:29:47.497 "is_configured": true, 00:29:47.497 "data_offset": 0, 00:29:47.497 "data_size": 65536 00:29:47.497 }, 00:29:47.497 { 00:29:47.497 "name": "BaseBdev4", 00:29:47.497 "uuid": "dc38ac51-3783-5d97-a547-ee1b4fadaff3", 00:29:47.497 "is_configured": true, 00:29:47.497 "data_offset": 0, 00:29:47.497 "data_size": 65536 00:29:47.497 } 00:29:47.497 ] 00:29:47.497 }' 00:29:47.497 14:22:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:47.497 14:22:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.435 14:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:48.435 [2024-07-15 14:22:34.300042] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:48.435 [2024-07-15 14:22:34.300105] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:48.435 [2024-07-15 14:22:34.300192] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:48.435 [2024-07-15 14:22:34.300269] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:48.435 [2024-07-15 14:22:34.300285] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:29:48.435 14:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:29:48.435 14:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:48.694 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:29:48.953 /dev/nbd0 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.953 1+0 records in 00:29:48.953 1+0 records out 00:29:48.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583102 s, 7.0 MB/s 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:48.953 14:22:34 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:29:49.211 /dev/nbd1 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # local i 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # break 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.211 1+0 records in 00:29:49.211 1+0 records out 00:29:49.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803002 s, 5.1 MB/s 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # size=4096 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # return 0 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:49.211 14:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:49.469 14:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:29:49.469 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:49.469 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:29:49.469 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local 
nbd_list 00:29:49.469 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:29:49.469 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.469 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.727 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 213217 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@948 -- # '[' -z 213217 ']' 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # kill -0 213217 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # uname 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 213217 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 213217' 00:29:49.984 killing process with pid 213217 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@967 -- # kill 213217 00:29:49.984 Received shutdown signal, test time was about 60.000000 seconds 00:29:49.984 00:29:49.984 Latency(us) 
00:29:49.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.984 =================================================================================================================== 00:29:49.984 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:49.984 [2024-07-15 14:22:35.892800] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:49.984 14:22:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # wait 213217 00:29:50.552 [2024-07-15 14:22:36.351505] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:51.929 14:22:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:29:51.929 00:29:51.929 real 0m24.272s 00:29:51.929 user 0m34.677s 00:29:51.929 sys 0m4.353s 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:29:51.930 ************************************ 00:29:51.930 END TEST raid_rebuild_test 00:29:51.930 ************************************ 00:29:51.930 14:22:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:29:51.930 14:22:37 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:29:51.930 14:22:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:29:51.930 14:22:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.930 14:22:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:51.930 ************************************ 00:29:51.930 START TEST raid_rebuild_test_sb 00:29:51.930 ************************************ 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true false true 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=213755 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 213755 /var/tmp/spdk-raid.sock 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@829 -- # '[' -z 213755 ']' 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:51.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:51.930 14:22:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:51.930 [2024-07-15 14:22:37.736433] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:29:51.930 [2024-07-15 14:22:37.736638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid213755 ] 00:29:51.930 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:51.930 Zero copy mechanism will not be used. 
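For reference, the rebuild-progress check that the trace above keeps repeating (bdev_raid_get_bdevs piped through jq) can also be run by hand against the same RPC socket while bdevperf is up. The sketch below is illustrative only: the rpc.py path, the /var/tmp/spdk-raid.sock socket, the bdev_raid_get_bdevs command and the jq filters are taken verbatim from the trace, while the polling loop and the variable names (RPC, info, ptype, target, percent) are not part of the test script and are assumptions added for clarity.

    # Minimal sketch: poll rebuild progress of raid_bdev1 over the test's RPC socket.
    # Not part of bdev_raid.sh; commands and jq filters mirror the trace above.
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    while true; do
        # Same selection the test uses to pull one raid bdev out of the array.
        info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(echo "$info" | jq -r '.process.type // "none"')       # "rebuild" while a rebuild runs
        target=$(echo "$info" | jq -r '.process.target // "none"')    # "spare" is the rebuild target here
        percent=$(echo "$info" | jq -r '.process.progress.percent // 0')
        echo "process=$ptype target=$target progress=${percent}%"
        # "none" means the rebuild has finished (or has not started yet).
        [ "$ptype" = "none" ] && break
        sleep 1
    done

The test script performs the same comparison with bash pattern matches ([[ rebuild == \r\e\b\u\i\l\d ]], [[ spare == \s\p\a\r\e ]]) instead of an explicit loop, and additionally re-reads base_bdevs_list to confirm num_base_bdevs_operational after a base bdev is removed.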
00:29:51.930 [2024-07-15 14:22:37.896911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.189 [2024-07-15 14:22:38.149562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.448 [2024-07-15 14:22:38.368744] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:53.063 14:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.063 14:22:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # return 0 00:29:53.063 14:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:53.063 14:22:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:53.321 BaseBdev1_malloc 00:29:53.321 14:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:53.578 [2024-07-15 14:22:39.380650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:53.578 [2024-07-15 14:22:39.380866] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:53.578 [2024-07-15 14:22:39.380928] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:29:53.578 [2024-07-15 14:22:39.380963] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:53.578 [2024-07-15 14:22:39.383150] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:53.578 [2024-07-15 14:22:39.383229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:53.578 BaseBdev1 00:29:53.578 14:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:53.578 14:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:53.836 BaseBdev2_malloc 00:29:53.836 14:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:54.095 [2024-07-15 14:22:39.993391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:54.095 [2024-07-15 14:22:39.993587] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:54.095 [2024-07-15 14:22:39.993644] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:29:54.095 [2024-07-15 14:22:39.993675] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:54.095 [2024-07-15 14:22:39.995778] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:54.095 [2024-07-15 14:22:39.995867] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:54.095 BaseBdev2 00:29:54.095 14:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:54.095 14:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:54.352 BaseBdev3_malloc 00:29:54.352 14:22:40 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:54.610 [2024-07-15 14:22:40.504569] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:54.610 [2024-07-15 14:22:40.504797] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:54.610 [2024-07-15 14:22:40.504848] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:54.610 [2024-07-15 14:22:40.504894] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:54.610 [2024-07-15 14:22:40.507019] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:54.610 [2024-07-15 14:22:40.507108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:54.610 BaseBdev3 00:29:54.610 14:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:54.610 14:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:54.868 BaseBdev4_malloc 00:29:54.868 14:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:55.126 [2024-07-15 14:22:41.014510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:55.126 [2024-07-15 14:22:41.014735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.126 [2024-07-15 14:22:41.014808] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:29:55.126 [2024-07-15 14:22:41.014846] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.126 [2024-07-15 14:22:41.016944] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.126 [2024-07-15 14:22:41.017013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:55.126 BaseBdev4 00:29:55.126 14:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:55.384 spare_malloc 00:29:55.384 14:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:55.642 spare_delay 00:29:55.642 14:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:55.898 [2024-07-15 14:22:41.756700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:55.898 [2024-07-15 14:22:41.756883] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:55.898 [2024-07-15 14:22:41.756950] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:55.898 [2024-07-15 14:22:41.756989] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:55.898 [2024-07-15 14:22:41.759184] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:55.898 [2024-07-15 
14:22:41.759268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:55.898 spare 00:29:55.898 14:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:56.155 [2024-07-15 14:22:41.984967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:56.155 [2024-07-15 14:22:41.986842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:56.155 [2024-07-15 14:22:41.986910] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:56.155 [2024-07-15 14:22:41.986967] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:56.155 [2024-07-15 14:22:41.987185] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:29:56.155 [2024-07-15 14:22:41.987213] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:56.155 [2024-07-15 14:22:41.987377] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:56.155 [2024-07-15 14:22:41.987683] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:29:56.155 [2024-07-15 14:22:41.987711] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:29:56.155 [2024-07-15 14:22:41.987890] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.155 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.412 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:56.412 "name": "raid_bdev1", 00:29:56.412 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:29:56.412 "strip_size_kb": 0, 00:29:56.412 "state": "online", 00:29:56.412 "raid_level": "raid1", 00:29:56.412 "superblock": true, 00:29:56.412 "num_base_bdevs": 4, 00:29:56.412 "num_base_bdevs_discovered": 4, 00:29:56.412 "num_base_bdevs_operational": 4, 00:29:56.412 "base_bdevs_list": [ 00:29:56.412 { 
00:29:56.412 "name": "BaseBdev1", 00:29:56.412 "uuid": "034d3f8c-e762-5874-af23-be18ef36d5bf", 00:29:56.412 "is_configured": true, 00:29:56.412 "data_offset": 2048, 00:29:56.412 "data_size": 63488 00:29:56.412 }, 00:29:56.412 { 00:29:56.412 "name": "BaseBdev2", 00:29:56.412 "uuid": "cf30702d-9062-5126-9711-ae24bac3c04c", 00:29:56.412 "is_configured": true, 00:29:56.412 "data_offset": 2048, 00:29:56.412 "data_size": 63488 00:29:56.412 }, 00:29:56.412 { 00:29:56.412 "name": "BaseBdev3", 00:29:56.412 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:29:56.412 "is_configured": true, 00:29:56.412 "data_offset": 2048, 00:29:56.412 "data_size": 63488 00:29:56.412 }, 00:29:56.412 { 00:29:56.412 "name": "BaseBdev4", 00:29:56.412 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:29:56.412 "is_configured": true, 00:29:56.412 "data_offset": 2048, 00:29:56.412 "data_size": 63488 00:29:56.412 } 00:29:56.412 ] 00:29:56.412 }' 00:29:56.412 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:56.412 14:22:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:56.978 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:56.978 14:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:57.236 [2024-07-15 14:22:43.197501] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:57.236 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:57.236 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:57.236 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:57.516 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:29:57.774 [2024-07-15 14:22:43.777498] 
bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:29:58.033 /dev/nbd0 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:58.033 1+0 records in 00:29:58.033 1+0 records out 00:29:58.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000741442 s, 5.5 MB/s 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:29:58.033 14:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:30:03.298 63488+0 records in 00:30:03.298 63488+0 records out 00:30:03.299 32505856 bytes (33 MB, 31 MiB) copied, 4.80953 s, 6.8 MB/s 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:03.299 [2024-07-15 14:22:48.943791] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:03.299 14:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:03.299 [2024-07-15 14:22:49.202923] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.299 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:03.557 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:03.557 "name": "raid_bdev1", 00:30:03.557 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:03.557 "strip_size_kb": 0, 00:30:03.557 "state": "online", 00:30:03.557 "raid_level": "raid1", 00:30:03.557 "superblock": true, 00:30:03.557 "num_base_bdevs": 4, 00:30:03.557 "num_base_bdevs_discovered": 3, 00:30:03.557 "num_base_bdevs_operational": 3, 00:30:03.557 "base_bdevs_list": [ 00:30:03.557 { 00:30:03.557 "name": null, 00:30:03.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:03.557 "is_configured": false, 00:30:03.557 "data_offset": 2048, 00:30:03.557 "data_size": 63488 00:30:03.557 }, 00:30:03.557 { 00:30:03.557 "name": "BaseBdev2", 00:30:03.557 "uuid": "cf30702d-9062-5126-9711-ae24bac3c04c", 00:30:03.557 "is_configured": true, 00:30:03.557 "data_offset": 2048, 
00:30:03.557 "data_size": 63488 00:30:03.557 }, 00:30:03.558 { 00:30:03.558 "name": "BaseBdev3", 00:30:03.558 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:03.558 "is_configured": true, 00:30:03.558 "data_offset": 2048, 00:30:03.558 "data_size": 63488 00:30:03.558 }, 00:30:03.558 { 00:30:03.558 "name": "BaseBdev4", 00:30:03.558 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:03.558 "is_configured": true, 00:30:03.558 "data_offset": 2048, 00:30:03.558 "data_size": 63488 00:30:03.558 } 00:30:03.558 ] 00:30:03.558 }' 00:30:03.558 14:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:03.558 14:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:04.144 14:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:04.409 [2024-07-15 14:22:50.371191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:04.409 [2024-07-15 14:22:50.384975] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:30:04.409 [2024-07-15 14:22:50.386715] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:04.409 14:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.785 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:05.786 "name": "raid_bdev1", 00:30:05.786 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:05.786 "strip_size_kb": 0, 00:30:05.786 "state": "online", 00:30:05.786 "raid_level": "raid1", 00:30:05.786 "superblock": true, 00:30:05.786 "num_base_bdevs": 4, 00:30:05.786 "num_base_bdevs_discovered": 4, 00:30:05.786 "num_base_bdevs_operational": 4, 00:30:05.786 "process": { 00:30:05.786 "type": "rebuild", 00:30:05.786 "target": "spare", 00:30:05.786 "progress": { 00:30:05.786 "blocks": 26624, 00:30:05.786 "percent": 41 00:30:05.786 } 00:30:05.786 }, 00:30:05.786 "base_bdevs_list": [ 00:30:05.786 { 00:30:05.786 "name": "spare", 00:30:05.786 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:05.786 "is_configured": true, 00:30:05.786 "data_offset": 2048, 00:30:05.786 "data_size": 63488 00:30:05.786 }, 00:30:05.786 { 00:30:05.786 "name": "BaseBdev2", 00:30:05.786 "uuid": "cf30702d-9062-5126-9711-ae24bac3c04c", 00:30:05.786 "is_configured": true, 00:30:05.786 "data_offset": 2048, 00:30:05.786 "data_size": 63488 00:30:05.786 }, 00:30:05.786 { 00:30:05.786 "name": "BaseBdev3", 00:30:05.786 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:05.786 
"is_configured": true, 00:30:05.786 "data_offset": 2048, 00:30:05.786 "data_size": 63488 00:30:05.786 }, 00:30:05.786 { 00:30:05.786 "name": "BaseBdev4", 00:30:05.786 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:05.786 "is_configured": true, 00:30:05.786 "data_offset": 2048, 00:30:05.786 "data_size": 63488 00:30:05.786 } 00:30:05.786 ] 00:30:05.786 }' 00:30:05.786 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:05.786 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:05.786 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:06.045 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:06.045 14:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:06.383 [2024-07-15 14:22:52.057552] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:06.383 [2024-07-15 14:22:52.100359] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:06.383 [2024-07-15 14:22:52.100998] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:06.383 [2024-07-15 14:22:52.101029] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:06.383 [2024-07-15 14:22:52.101072] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.383 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:06.641 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:06.641 "name": "raid_bdev1", 00:30:06.641 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:06.641 "strip_size_kb": 0, 00:30:06.641 "state": "online", 00:30:06.641 "raid_level": "raid1", 00:30:06.641 "superblock": true, 00:30:06.641 "num_base_bdevs": 4, 00:30:06.641 "num_base_bdevs_discovered": 3, 00:30:06.641 "num_base_bdevs_operational": 3, 00:30:06.641 "base_bdevs_list": [ 00:30:06.641 { 
00:30:06.641 "name": null, 00:30:06.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.641 "is_configured": false, 00:30:06.641 "data_offset": 2048, 00:30:06.641 "data_size": 63488 00:30:06.641 }, 00:30:06.641 { 00:30:06.641 "name": "BaseBdev2", 00:30:06.641 "uuid": "cf30702d-9062-5126-9711-ae24bac3c04c", 00:30:06.641 "is_configured": true, 00:30:06.641 "data_offset": 2048, 00:30:06.641 "data_size": 63488 00:30:06.641 }, 00:30:06.641 { 00:30:06.641 "name": "BaseBdev3", 00:30:06.641 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:06.641 "is_configured": true, 00:30:06.641 "data_offset": 2048, 00:30:06.641 "data_size": 63488 00:30:06.641 }, 00:30:06.641 { 00:30:06.641 "name": "BaseBdev4", 00:30:06.641 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:06.641 "is_configured": true, 00:30:06.641 "data_offset": 2048, 00:30:06.641 "data_size": 63488 00:30:06.641 } 00:30:06.641 ] 00:30:06.641 }' 00:30:06.641 14:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:06.641 14:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.210 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:07.210 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:07.210 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:07.210 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:07.210 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:07.210 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:07.210 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:07.470 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:07.470 "name": "raid_bdev1", 00:30:07.470 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:07.470 "strip_size_kb": 0, 00:30:07.470 "state": "online", 00:30:07.470 "raid_level": "raid1", 00:30:07.470 "superblock": true, 00:30:07.470 "num_base_bdevs": 4, 00:30:07.470 "num_base_bdevs_discovered": 3, 00:30:07.470 "num_base_bdevs_operational": 3, 00:30:07.470 "base_bdevs_list": [ 00:30:07.470 { 00:30:07.470 "name": null, 00:30:07.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.470 "is_configured": false, 00:30:07.470 "data_offset": 2048, 00:30:07.470 "data_size": 63488 00:30:07.470 }, 00:30:07.470 { 00:30:07.470 "name": "BaseBdev2", 00:30:07.470 "uuid": "cf30702d-9062-5126-9711-ae24bac3c04c", 00:30:07.470 "is_configured": true, 00:30:07.470 "data_offset": 2048, 00:30:07.470 "data_size": 63488 00:30:07.470 }, 00:30:07.470 { 00:30:07.470 "name": "BaseBdev3", 00:30:07.470 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:07.470 "is_configured": true, 00:30:07.470 "data_offset": 2048, 00:30:07.470 "data_size": 63488 00:30:07.470 }, 00:30:07.470 { 00:30:07.470 "name": "BaseBdev4", 00:30:07.470 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:07.470 "is_configured": true, 00:30:07.470 "data_offset": 2048, 00:30:07.470 "data_size": 63488 00:30:07.470 } 00:30:07.470 ] 00:30:07.470 }' 00:30:07.470 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:07.470 14:22:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:07.470 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:07.728 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:07.728 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:07.987 [2024-07-15 14:22:53.753112] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:07.987 [2024-07-15 14:22:53.764989] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3770 00:30:07.987 [2024-07-15 14:22:53.766552] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:07.987 14:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:08.922 14:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:08.922 14:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:08.922 14:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:08.922 14:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:08.922 14:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:08.922 14:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:08.922 14:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.180 "name": "raid_bdev1", 00:30:09.180 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:09.180 "strip_size_kb": 0, 00:30:09.180 "state": "online", 00:30:09.180 "raid_level": "raid1", 00:30:09.180 "superblock": true, 00:30:09.180 "num_base_bdevs": 4, 00:30:09.180 "num_base_bdevs_discovered": 4, 00:30:09.180 "num_base_bdevs_operational": 4, 00:30:09.180 "process": { 00:30:09.180 "type": "rebuild", 00:30:09.180 "target": "spare", 00:30:09.180 "progress": { 00:30:09.180 "blocks": 24576, 00:30:09.180 "percent": 38 00:30:09.180 } 00:30:09.180 }, 00:30:09.180 "base_bdevs_list": [ 00:30:09.180 { 00:30:09.180 "name": "spare", 00:30:09.180 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:09.180 "is_configured": true, 00:30:09.180 "data_offset": 2048, 00:30:09.180 "data_size": 63488 00:30:09.180 }, 00:30:09.180 { 00:30:09.180 "name": "BaseBdev2", 00:30:09.180 "uuid": "cf30702d-9062-5126-9711-ae24bac3c04c", 00:30:09.180 "is_configured": true, 00:30:09.180 "data_offset": 2048, 00:30:09.180 "data_size": 63488 00:30:09.180 }, 00:30:09.180 { 00:30:09.180 "name": "BaseBdev3", 00:30:09.180 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:09.180 "is_configured": true, 00:30:09.180 "data_offset": 2048, 00:30:09.180 "data_size": 63488 00:30:09.180 }, 00:30:09.180 { 00:30:09.180 "name": "BaseBdev4", 00:30:09.180 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:09.180 "is_configured": true, 00:30:09.180 "data_offset": 2048, 00:30:09.180 "data_size": 63488 00:30:09.180 } 00:30:09.180 ] 00:30:09.180 }' 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:30:09.180 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:09.180 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:09.438 [2024-07-15 14:22:55.421104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:09.698 [2024-07-15 14:22:55.577151] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3770 00:30:09.698 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:09.698 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:09.698 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:09.698 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:09.698 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:09.698 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:09.698 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:09.699 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:09.699 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.963 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.963 "name": "raid_bdev1", 00:30:09.963 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:09.963 "strip_size_kb": 0, 00:30:09.963 "state": "online", 00:30:09.963 "raid_level": "raid1", 00:30:09.963 "superblock": true, 00:30:09.963 "num_base_bdevs": 4, 00:30:09.963 "num_base_bdevs_discovered": 3, 00:30:09.963 "num_base_bdevs_operational": 3, 00:30:09.963 "process": { 00:30:09.963 "type": "rebuild", 00:30:09.963 "target": "spare", 00:30:09.963 "progress": { 00:30:09.963 "blocks": 38912, 00:30:09.963 "percent": 61 00:30:09.963 } 00:30:09.963 }, 00:30:09.963 "base_bdevs_list": [ 00:30:09.963 { 00:30:09.963 "name": "spare", 00:30:09.963 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:09.963 "is_configured": true, 00:30:09.963 "data_offset": 2048, 00:30:09.963 "data_size": 63488 00:30:09.963 }, 00:30:09.963 { 00:30:09.963 "name": null, 00:30:09.963 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:09.963 "is_configured": false, 00:30:09.964 "data_offset": 2048, 00:30:09.964 "data_size": 63488 00:30:09.964 }, 00:30:09.964 { 00:30:09.964 "name": "BaseBdev3", 00:30:09.964 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:09.964 "is_configured": true, 00:30:09.964 "data_offset": 2048, 00:30:09.964 "data_size": 63488 00:30:09.964 }, 00:30:09.964 { 00:30:09.964 "name": "BaseBdev4", 00:30:09.964 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:09.964 "is_configured": true, 00:30:09.964 "data_offset": 2048, 00:30:09.964 "data_size": 63488 00:30:09.964 } 00:30:09.964 ] 00:30:09.964 }' 00:30:09.964 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:09.964 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:09.964 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1032 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.235 14:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:10.235 14:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:10.235 "name": "raid_bdev1", 00:30:10.235 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:10.235 "strip_size_kb": 0, 00:30:10.235 "state": "online", 00:30:10.235 "raid_level": "raid1", 00:30:10.235 "superblock": true, 00:30:10.235 "num_base_bdevs": 4, 00:30:10.235 "num_base_bdevs_discovered": 3, 00:30:10.235 "num_base_bdevs_operational": 3, 00:30:10.235 "process": { 00:30:10.235 "type": "rebuild", 00:30:10.235 "target": "spare", 00:30:10.235 "progress": { 00:30:10.235 "blocks": 47104, 00:30:10.235 "percent": 74 00:30:10.235 } 00:30:10.235 }, 00:30:10.235 "base_bdevs_list": [ 00:30:10.235 { 00:30:10.235 "name": "spare", 00:30:10.235 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:10.235 "is_configured": true, 00:30:10.235 "data_offset": 2048, 00:30:10.235 "data_size": 63488 00:30:10.235 }, 00:30:10.235 { 00:30:10.235 "name": null, 00:30:10.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.235 "is_configured": false, 00:30:10.235 "data_offset": 2048, 00:30:10.235 "data_size": 63488 00:30:10.235 }, 00:30:10.235 { 00:30:10.235 "name": "BaseBdev3", 00:30:10.235 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:10.235 "is_configured": true, 00:30:10.235 "data_offset": 2048, 00:30:10.235 "data_size": 63488 00:30:10.235 }, 
00:30:10.235 { 00:30:10.235 "name": "BaseBdev4", 00:30:10.235 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:10.235 "is_configured": true, 00:30:10.235 "data_offset": 2048, 00:30:10.235 "data_size": 63488 00:30:10.235 } 00:30:10.235 ] 00:30:10.235 }' 00:30:10.239 14:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:10.503 14:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:10.503 14:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:10.503 14:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:10.503 14:22:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:11.070 [2024-07-15 14:22:56.985198] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:11.070 [2024-07-15 14:22:56.985572] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:30:11.070 [2024-07-15 14:22:56.985809] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.328 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:11.895 "name": "raid_bdev1", 00:30:11.895 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:11.895 "strip_size_kb": 0, 00:30:11.895 "state": "online", 00:30:11.895 "raid_level": "raid1", 00:30:11.895 "superblock": true, 00:30:11.895 "num_base_bdevs": 4, 00:30:11.895 "num_base_bdevs_discovered": 3, 00:30:11.895 "num_base_bdevs_operational": 3, 00:30:11.895 "base_bdevs_list": [ 00:30:11.895 { 00:30:11.895 "name": "spare", 00:30:11.895 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:11.895 "is_configured": true, 00:30:11.895 "data_offset": 2048, 00:30:11.895 "data_size": 63488 00:30:11.895 }, 00:30:11.895 { 00:30:11.895 "name": null, 00:30:11.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.895 "is_configured": false, 00:30:11.895 "data_offset": 2048, 00:30:11.895 "data_size": 63488 00:30:11.895 }, 00:30:11.895 { 00:30:11.895 "name": "BaseBdev3", 00:30:11.895 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:11.895 "is_configured": true, 00:30:11.895 "data_offset": 2048, 00:30:11.895 "data_size": 63488 00:30:11.895 }, 00:30:11.895 { 00:30:11.895 "name": "BaseBdev4", 00:30:11.895 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:11.895 "is_configured": true, 00:30:11.895 "data_offset": 2048, 00:30:11.895 "data_size": 63488 00:30:11.895 } 
00:30:11.895 ] 00:30:11.895 }' 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:11.895 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:11.896 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:11.896 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:11.896 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.154 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:12.154 "name": "raid_bdev1", 00:30:12.154 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:12.154 "strip_size_kb": 0, 00:30:12.154 "state": "online", 00:30:12.154 "raid_level": "raid1", 00:30:12.154 "superblock": true, 00:30:12.154 "num_base_bdevs": 4, 00:30:12.154 "num_base_bdevs_discovered": 3, 00:30:12.154 "num_base_bdevs_operational": 3, 00:30:12.154 "base_bdevs_list": [ 00:30:12.154 { 00:30:12.154 "name": "spare", 00:30:12.154 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:12.154 "is_configured": true, 00:30:12.154 "data_offset": 2048, 00:30:12.154 "data_size": 63488 00:30:12.154 }, 00:30:12.154 { 00:30:12.154 "name": null, 00:30:12.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.154 "is_configured": false, 00:30:12.154 "data_offset": 2048, 00:30:12.154 "data_size": 63488 00:30:12.154 }, 00:30:12.154 { 00:30:12.154 "name": "BaseBdev3", 00:30:12.154 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:12.154 "is_configured": true, 00:30:12.154 "data_offset": 2048, 00:30:12.154 "data_size": 63488 00:30:12.154 }, 00:30:12.154 { 00:30:12.154 "name": "BaseBdev4", 00:30:12.154 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:12.154 "is_configured": true, 00:30:12.154 "data_offset": 2048, 00:30:12.154 "data_size": 63488 00:30:12.154 } 00:30:12.154 ] 00:30:12.154 }' 00:30:12.154 14:22:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:12.154 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:12.413 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.413 "name": "raid_bdev1", 00:30:12.413 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:12.413 "strip_size_kb": 0, 00:30:12.413 "state": "online", 00:30:12.413 "raid_level": "raid1", 00:30:12.413 "superblock": true, 00:30:12.413 "num_base_bdevs": 4, 00:30:12.413 "num_base_bdevs_discovered": 3, 00:30:12.413 "num_base_bdevs_operational": 3, 00:30:12.413 "base_bdevs_list": [ 00:30:12.413 { 00:30:12.413 "name": "spare", 00:30:12.413 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:12.413 "is_configured": true, 00:30:12.413 "data_offset": 2048, 00:30:12.413 "data_size": 63488 00:30:12.413 }, 00:30:12.413 { 00:30:12.413 "name": null, 00:30:12.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.413 "is_configured": false, 00:30:12.413 "data_offset": 2048, 00:30:12.413 "data_size": 63488 00:30:12.413 }, 00:30:12.413 { 00:30:12.413 "name": "BaseBdev3", 00:30:12.413 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:12.413 "is_configured": true, 00:30:12.413 "data_offset": 2048, 00:30:12.413 "data_size": 63488 00:30:12.413 }, 00:30:12.413 { 00:30:12.413 "name": "BaseBdev4", 00:30:12.413 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:12.413 "is_configured": true, 00:30:12.413 "data_offset": 2048, 00:30:12.413 "data_size": 63488 00:30:12.413 } 00:30:12.413 ] 00:30:12.413 }' 00:30:12.413 14:22:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.413 14:22:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.347 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:13.636 [2024-07-15 14:22:59.361083] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:13.636 [2024-07-15 14:22:59.361276] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:13.636 [2024-07-15 14:22:59.361475] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:13.636 [2024-07-15 14:22:59.361664] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:13.636 [2024-07-15 14:22:59.361811] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:30:13.636 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.636 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:13.893 14:22:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:30:14.151 /dev/nbd0 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:14.151 1+0 records in 00:30:14.151 1+0 records out 00:30:14.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371937 s, 11.0 MB/s 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:14.151 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:30:14.410 /dev/nbd1 00:30:14.410 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # local i 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # break 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:14.411 1+0 records in 00:30:14.411 1+0 records out 00:30:14.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392303 s, 10.4 MB/s 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # size=4096 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # return 0 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:30:14.411 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:30:14.669 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:30:14.669 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:14.669 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:30:14.669 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:14.669 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 
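A hedged aside, not part of the captured trace: the data-integrity check driven above by nbd_start_disks and cmp could be reproduced standalone roughly as follows. The RPC socket, bdev names and the 1048576-byte skip (the data_offset of 2048 blocks at the 512-byte blocklen reported in this trace) are taken from the log; the wrapper itself is only illustrative.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0   # export the original base bdev over NBD
"$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1       # export the rebuilt spare over NBD
# skip the first 1 MiB (superblock/data_offset region) on both devices, as the trace does
cmp -i 1048576 /dev/nbd0 /dev/nbd1 && echo 'rebuilt spare matches the base bdev'
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd1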
00:30:14.669 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.669 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:14.927 14:23:00 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:30:15.185 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:15.442 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:15.700 [2024-07-15 14:23:01.597441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:15.700 [2024-07-15 14:23:01.597779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:15.700 [2024-07-15 14:23:01.597878] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:30:15.700 [2024-07-15 14:23:01.598089] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:15.700 [2024-07-15 14:23:01.599860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:15.700 [2024-07-15 14:23:01.600066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:15.700 [2024-07-15 14:23:01.600266] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:15.700 [2024-07-15 14:23:01.600418] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:15.700 [2024-07-15 14:23:01.600581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:15.700 [2024-07-15 14:23:01.600791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:15.700 spare 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:15.700 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:15.701 [2024-07-15 14:23:01.700986] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:30:15.701 [2024-07-15 14:23:01.701272] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:15.701 [2024-07-15 14:23:01.701519] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:30:15.701 [2024-07-15 14:23:01.701990] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:30:15.701 [2024-07-15 14:23:01.702109] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:30:15.701 [2024-07-15 14:23:01.702361] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:15.958 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:15.958 "name": "raid_bdev1", 00:30:15.958 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:15.958 "strip_size_kb": 0, 00:30:15.958 "state": "online", 00:30:15.958 "raid_level": "raid1", 00:30:15.958 "superblock": true, 00:30:15.958 "num_base_bdevs": 4, 00:30:15.958 "num_base_bdevs_discovered": 3, 00:30:15.958 "num_base_bdevs_operational": 3, 00:30:15.958 "base_bdevs_list": [ 00:30:15.958 { 00:30:15.958 "name": "spare", 00:30:15.958 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:15.958 "is_configured": true, 00:30:15.958 "data_offset": 2048, 00:30:15.958 "data_size": 63488 00:30:15.958 }, 00:30:15.958 { 00:30:15.958 "name": null, 00:30:15.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:15.958 "is_configured": false, 00:30:15.958 "data_offset": 2048, 00:30:15.958 "data_size": 63488 00:30:15.958 }, 00:30:15.958 { 00:30:15.958 "name": "BaseBdev3", 00:30:15.958 "uuid": 
"7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:15.958 "is_configured": true, 00:30:15.958 "data_offset": 2048, 00:30:15.958 "data_size": 63488 00:30:15.958 }, 00:30:15.958 { 00:30:15.958 "name": "BaseBdev4", 00:30:15.958 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:15.958 "is_configured": true, 00:30:15.958 "data_offset": 2048, 00:30:15.958 "data_size": 63488 00:30:15.958 } 00:30:15.958 ] 00:30:15.958 }' 00:30:15.958 14:23:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:15.958 14:23:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:16.526 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:16.526 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:16.526 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:16.526 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:16.526 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:16.526 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.526 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.092 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:17.092 "name": "raid_bdev1", 00:30:17.092 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:17.092 "strip_size_kb": 0, 00:30:17.092 "state": "online", 00:30:17.092 "raid_level": "raid1", 00:30:17.092 "superblock": true, 00:30:17.092 "num_base_bdevs": 4, 00:30:17.092 "num_base_bdevs_discovered": 3, 00:30:17.092 "num_base_bdevs_operational": 3, 00:30:17.092 "base_bdevs_list": [ 00:30:17.092 { 00:30:17.092 "name": "spare", 00:30:17.092 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:17.092 "is_configured": true, 00:30:17.092 "data_offset": 2048, 00:30:17.092 "data_size": 63488 00:30:17.092 }, 00:30:17.092 { 00:30:17.092 "name": null, 00:30:17.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.092 "is_configured": false, 00:30:17.092 "data_offset": 2048, 00:30:17.092 "data_size": 63488 00:30:17.092 }, 00:30:17.092 { 00:30:17.092 "name": "BaseBdev3", 00:30:17.092 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:17.092 "is_configured": true, 00:30:17.092 "data_offset": 2048, 00:30:17.092 "data_size": 63488 00:30:17.092 }, 00:30:17.092 { 00:30:17.092 "name": "BaseBdev4", 00:30:17.092 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:17.092 "is_configured": true, 00:30:17.092 "data_offset": 2048, 00:30:17.092 "data_size": 63488 00:30:17.092 } 00:30:17.092 ] 00:30:17.092 }' 00:30:17.092 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:17.092 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:17.092 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:17.092 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:17.092 14:23:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.092 14:23:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:30:17.351 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:30:17.351 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:17.610 [2024-07-15 14:23:03.374667] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:17.610 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.944 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:17.944 "name": "raid_bdev1", 00:30:17.944 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:17.944 "strip_size_kb": 0, 00:30:17.944 "state": "online", 00:30:17.944 "raid_level": "raid1", 00:30:17.944 "superblock": true, 00:30:17.944 "num_base_bdevs": 4, 00:30:17.944 "num_base_bdevs_discovered": 2, 00:30:17.944 "num_base_bdevs_operational": 2, 00:30:17.944 "base_bdevs_list": [ 00:30:17.944 { 00:30:17.944 "name": null, 00:30:17.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.944 "is_configured": false, 00:30:17.944 "data_offset": 2048, 00:30:17.944 "data_size": 63488 00:30:17.944 }, 00:30:17.944 { 00:30:17.944 "name": null, 00:30:17.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.944 "is_configured": false, 00:30:17.944 "data_offset": 2048, 00:30:17.944 "data_size": 63488 00:30:17.944 }, 00:30:17.944 { 00:30:17.944 "name": "BaseBdev3", 00:30:17.944 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:17.944 "is_configured": true, 00:30:17.944 "data_offset": 2048, 00:30:17.944 "data_size": 63488 00:30:17.944 }, 00:30:17.944 { 00:30:17.944 "name": "BaseBdev4", 00:30:17.944 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:17.944 "is_configured": true, 00:30:17.944 "data_offset": 2048, 00:30:17.944 "data_size": 63488 00:30:17.944 } 00:30:17.944 ] 00:30:17.944 }' 00:30:17.944 14:23:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:17.944 14:23:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:18.512 14:23:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:18.512 [2024-07-15 14:23:04.482881] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:18.512 [2024-07-15 14:23:04.483257] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:18.512 [2024-07-15 14:23:04.483376] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:30:18.512 [2024-07-15 14:23:04.483843] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:18.512 [2024-07-15 14:23:04.496213] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230 00:30:18.512 14:23:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:30:18.512 [2024-07-15 14:23:04.510667] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:19.885 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:19.885 "name": "raid_bdev1", 00:30:19.885 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:19.885 "strip_size_kb": 0, 00:30:19.885 "state": "online", 00:30:19.885 "raid_level": "raid1", 00:30:19.885 "superblock": true, 00:30:19.885 "num_base_bdevs": 4, 00:30:19.885 "num_base_bdevs_discovered": 3, 00:30:19.885 "num_base_bdevs_operational": 3, 00:30:19.885 "process": { 00:30:19.885 "type": "rebuild", 00:30:19.885 "target": "spare", 00:30:19.885 "progress": { 00:30:19.885 "blocks": 26624, 00:30:19.885 "percent": 41 00:30:19.885 } 00:30:19.885 }, 00:30:19.885 "base_bdevs_list": [ 00:30:19.885 { 00:30:19.885 "name": "spare", 00:30:19.885 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:19.885 "is_configured": true, 00:30:19.885 "data_offset": 2048, 00:30:19.885 "data_size": 63488 00:30:19.885 }, 00:30:19.885 { 00:30:19.885 "name": null, 00:30:19.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:19.885 "is_configured": false, 00:30:19.885 "data_offset": 2048, 00:30:19.886 "data_size": 63488 00:30:19.886 }, 00:30:19.886 { 00:30:19.886 "name": "BaseBdev3", 00:30:19.886 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:19.886 "is_configured": true, 00:30:19.886 "data_offset": 2048, 00:30:19.886 "data_size": 63488 00:30:19.886 }, 00:30:19.886 { 00:30:19.886 "name": "BaseBdev4", 00:30:19.886 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:19.886 "is_configured": true, 00:30:19.886 "data_offset": 2048, 00:30:19.886 "data_size": 63488 00:30:19.886 } 
00:30:19.886 ] 00:30:19.886 }' 00:30:19.886 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:20.143 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:20.143 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:20.143 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:20.143 14:23:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:20.401 [2024-07-15 14:23:06.220150] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:20.401 [2024-07-15 14:23:06.221182] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:20.401 [2024-07-15 14:23:06.221377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:20.401 [2024-07-15 14:23:06.221502] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:20.401 [2024-07-15 14:23:06.221550] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:20.401 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:20.402 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:20.402 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.660 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:20.660 "name": "raid_bdev1", 00:30:20.660 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:20.660 "strip_size_kb": 0, 00:30:20.660 "state": "online", 00:30:20.660 "raid_level": "raid1", 00:30:20.660 "superblock": true, 00:30:20.660 "num_base_bdevs": 4, 00:30:20.660 "num_base_bdevs_discovered": 2, 00:30:20.660 "num_base_bdevs_operational": 2, 00:30:20.660 "base_bdevs_list": [ 00:30:20.660 { 00:30:20.660 "name": null, 00:30:20.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.660 "is_configured": false, 00:30:20.660 "data_offset": 2048, 00:30:20.660 "data_size": 63488 00:30:20.660 }, 00:30:20.660 { 00:30:20.660 "name": null, 00:30:20.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.660 "is_configured": 
false, 00:30:20.660 "data_offset": 2048, 00:30:20.660 "data_size": 63488 00:30:20.660 }, 00:30:20.660 { 00:30:20.660 "name": "BaseBdev3", 00:30:20.660 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:20.660 "is_configured": true, 00:30:20.660 "data_offset": 2048, 00:30:20.660 "data_size": 63488 00:30:20.660 }, 00:30:20.660 { 00:30:20.660 "name": "BaseBdev4", 00:30:20.660 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:20.660 "is_configured": true, 00:30:20.660 "data_offset": 2048, 00:30:20.660 "data_size": 63488 00:30:20.660 } 00:30:20.660 ] 00:30:20.660 }' 00:30:20.660 14:23:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:20.660 14:23:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:21.597 14:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:21.597 [2024-07-15 14:23:07.580775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:21.597 [2024-07-15 14:23:07.581008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:21.597 [2024-07-15 14:23:07.581111] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:30:21.597 [2024-07-15 14:23:07.581353] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:21.597 [2024-07-15 14:23:07.581807] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:21.597 [2024-07-15 14:23:07.581956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:21.597 [2024-07-15 14:23:07.582184] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:30:21.597 [2024-07-15 14:23:07.582306] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:30:21.597 [2024-07-15 14:23:07.582413] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
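The NOTICE above shows the re-created spare passthru being examined and re-added to raid_bdev1, after which a new rebuild is started. A minimal sketch of how that rebuild could be watched from the shell, assuming the same RPC socket; the jq filters are the ones used by verify_raid_bdev_process in this trace, while the polling loop itself is illustrative:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
while :; do
    # fetch the current raid_bdev1 info, exactly as bdev_raid.sh@187 does above
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # stop once no rebuild process is reported (".process.type // \"none\"")
    [ "$(jq -r '.process.type // "none"' <<< "$info")" = rebuild ] || break
    echo "rebuild target=$(jq -r '.process.target' <<< "$info") progress=$(jq -r '.process.progress.percent' <<< "$info")%"
    sleep 1
done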
00:30:21.597 [2024-07-15 14:23:07.582493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:21.597 [2024-07-15 14:23:07.594780] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2570 00:30:21.597 spare 00:30:21.597 [2024-07-15 14:23:07.596322] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:21.856 14:23:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:30:22.791 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:22.791 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:22.791 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:22.791 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:22.791 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:22.791 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.791 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.049 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:23.050 "name": "raid_bdev1", 00:30:23.050 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:23.050 "strip_size_kb": 0, 00:30:23.050 "state": "online", 00:30:23.050 "raid_level": "raid1", 00:30:23.050 "superblock": true, 00:30:23.050 "num_base_bdevs": 4, 00:30:23.050 "num_base_bdevs_discovered": 3, 00:30:23.050 "num_base_bdevs_operational": 3, 00:30:23.050 "process": { 00:30:23.050 "type": "rebuild", 00:30:23.050 "target": "spare", 00:30:23.050 "progress": { 00:30:23.050 "blocks": 24576, 00:30:23.050 "percent": 38 00:30:23.050 } 00:30:23.050 }, 00:30:23.050 "base_bdevs_list": [ 00:30:23.050 { 00:30:23.050 "name": "spare", 00:30:23.050 "uuid": "fde72649-721b-5fbe-a181-be0d7a5b31e3", 00:30:23.050 "is_configured": true, 00:30:23.050 "data_offset": 2048, 00:30:23.050 "data_size": 63488 00:30:23.050 }, 00:30:23.050 { 00:30:23.050 "name": null, 00:30:23.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.050 "is_configured": false, 00:30:23.050 "data_offset": 2048, 00:30:23.050 "data_size": 63488 00:30:23.050 }, 00:30:23.050 { 00:30:23.050 "name": "BaseBdev3", 00:30:23.050 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:23.050 "is_configured": true, 00:30:23.050 "data_offset": 2048, 00:30:23.050 "data_size": 63488 00:30:23.050 }, 00:30:23.050 { 00:30:23.050 "name": "BaseBdev4", 00:30:23.050 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:23.050 "is_configured": true, 00:30:23.050 "data_offset": 2048, 00:30:23.050 "data_size": 63488 00:30:23.050 } 00:30:23.050 ] 00:30:23.050 }' 00:30:23.050 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:23.050 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:23.050 14:23:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:23.050 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:23.050 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:30:23.308 [2024-07-15 14:23:09.239327] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:23.308 [2024-07-15 14:23:09.306461] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:23.308 [2024-07-15 14:23:09.306704] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:23.308 [2024-07-15 14:23:09.306786] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:23.308 [2024-07-15 14:23:09.306908] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.567 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:23.825 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:23.825 "name": "raid_bdev1", 00:30:23.825 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:23.825 "strip_size_kb": 0, 00:30:23.825 "state": "online", 00:30:23.825 "raid_level": "raid1", 00:30:23.825 "superblock": true, 00:30:23.825 "num_base_bdevs": 4, 00:30:23.825 "num_base_bdevs_discovered": 2, 00:30:23.825 "num_base_bdevs_operational": 2, 00:30:23.825 "base_bdevs_list": [ 00:30:23.825 { 00:30:23.825 "name": null, 00:30:23.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.825 "is_configured": false, 00:30:23.825 "data_offset": 2048, 00:30:23.825 "data_size": 63488 00:30:23.825 }, 00:30:23.825 { 00:30:23.825 "name": null, 00:30:23.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.825 "is_configured": false, 00:30:23.825 "data_offset": 2048, 00:30:23.825 "data_size": 63488 00:30:23.825 }, 00:30:23.825 { 00:30:23.825 "name": "BaseBdev3", 00:30:23.825 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:23.825 "is_configured": true, 00:30:23.825 "data_offset": 2048, 00:30:23.825 "data_size": 63488 00:30:23.825 }, 00:30:23.825 { 00:30:23.825 "name": "BaseBdev4", 00:30:23.825 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:23.825 "is_configured": true, 00:30:23.825 "data_offset": 2048, 00:30:23.825 "data_size": 63488 00:30:23.825 } 00:30:23.825 ] 00:30:23.825 }' 
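Both verification helpers traced in this section, verify_raid_bdev_process and verify_raid_bdev_state, reduce to the same pattern: fetch the raid bdev descriptor once and check individual fields with jq. A hedged sketch of that pattern, paraphrased from the xtrace rather than copied from bdev_raid.sh (rpc.py abbreviated as above):

    info=$(rpc.py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # verify_raid_bdev_state: overall state and member counts
    echo "$info" | jq -r '.state'                        # "online" is expected throughout
    echo "$info" | jq -r '.num_base_bdevs_operational'   # drops from 3 to 2 once the spare is deleted
    # verify_raid_bdev_process: background process type and its target
    echo "$info" | jq -r '.process.type // "none"'       # "rebuild" while rebuilding, "none" afterwards
    echo "$info" | jq -r '.process.target // "none"'     # "spare" while the rebuild targets the spare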
00:30:23.825 14:23:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:23.825 14:23:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:24.393 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:24.393 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:24.393 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:24.393 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:24.393 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:24.393 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.393 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.651 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:24.651 "name": "raid_bdev1", 00:30:24.651 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:24.651 "strip_size_kb": 0, 00:30:24.651 "state": "online", 00:30:24.651 "raid_level": "raid1", 00:30:24.651 "superblock": true, 00:30:24.651 "num_base_bdevs": 4, 00:30:24.651 "num_base_bdevs_discovered": 2, 00:30:24.651 "num_base_bdevs_operational": 2, 00:30:24.651 "base_bdevs_list": [ 00:30:24.651 { 00:30:24.651 "name": null, 00:30:24.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.651 "is_configured": false, 00:30:24.651 "data_offset": 2048, 00:30:24.651 "data_size": 63488 00:30:24.651 }, 00:30:24.651 { 00:30:24.651 "name": null, 00:30:24.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:24.651 "is_configured": false, 00:30:24.651 "data_offset": 2048, 00:30:24.651 "data_size": 63488 00:30:24.651 }, 00:30:24.651 { 00:30:24.651 "name": "BaseBdev3", 00:30:24.651 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:24.651 "is_configured": true, 00:30:24.651 "data_offset": 2048, 00:30:24.651 "data_size": 63488 00:30:24.651 }, 00:30:24.651 { 00:30:24.651 "name": "BaseBdev4", 00:30:24.651 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:24.651 "is_configured": true, 00:30:24.651 "data_offset": 2048, 00:30:24.651 "data_size": 63488 00:30:24.651 } 00:30:24.651 ] 00:30:24.651 }' 00:30:24.651 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:24.651 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:24.651 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:24.651 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:24.651 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:24.909 14:23:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:25.476 [2024-07-15 14:23:11.195106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:25.476 [2024-07-15 14:23:11.195381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
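The step traced here recreates BaseBdev1 on top of its original malloc bdev. Its superblock is stale (seq_number 1 against 6 on the running raid_bdev1) and its uuid is no longer listed, so examine leaves it out and an explicit add is expected to fail; the harness asserts that with its NOT wrapper, which succeeds only when the wrapped command exits non-zero. A sketch of the same check, using only RPCs that appear in this log (rpc.py abbreviated as above):

    rpc.py bdev_passthru_delete BaseBdev1
    rpc.py bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
    # the explicit re-add is rejected with JSON-RPC error -22, "Failed to add base bdev to RAID bdev: Invalid argument"
    NOT rpc.py bdev_raid_add_base_bdev raid_bdev1 BaseBdev1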
00:30:25.476 [2024-07-15 14:23:11.195466] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:30:25.476 [2024-07-15 14:23:11.195597] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:25.476 [2024-07-15 14:23:11.196006] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:25.476 [2024-07-15 14:23:11.196154] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:25.476 [2024-07-15 14:23:11.196371] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:25.476 [2024-07-15 14:23:11.196490] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:25.476 [2024-07-15 14:23:11.196596] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:25.476 BaseBdev1 00:30:25.476 14:23:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.413 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:26.671 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.671 "name": "raid_bdev1", 00:30:26.671 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:26.671 "strip_size_kb": 0, 00:30:26.671 "state": "online", 00:30:26.671 "raid_level": "raid1", 00:30:26.671 "superblock": true, 00:30:26.671 "num_base_bdevs": 4, 00:30:26.671 "num_base_bdevs_discovered": 2, 00:30:26.671 "num_base_bdevs_operational": 2, 00:30:26.671 "base_bdevs_list": [ 00:30:26.671 { 00:30:26.671 "name": null, 00:30:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.671 "is_configured": false, 00:30:26.671 "data_offset": 2048, 00:30:26.671 "data_size": 63488 00:30:26.671 }, 00:30:26.671 { 00:30:26.671 "name": null, 00:30:26.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.671 "is_configured": false, 00:30:26.671 "data_offset": 2048, 00:30:26.671 "data_size": 63488 00:30:26.671 }, 00:30:26.671 { 00:30:26.671 "name": "BaseBdev3", 00:30:26.671 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:26.671 "is_configured": 
true, 00:30:26.671 "data_offset": 2048, 00:30:26.671 "data_size": 63488 00:30:26.671 }, 00:30:26.671 { 00:30:26.671 "name": "BaseBdev4", 00:30:26.671 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:26.671 "is_configured": true, 00:30:26.671 "data_offset": 2048, 00:30:26.671 "data_size": 63488 00:30:26.671 } 00:30:26.671 ] 00:30:26.671 }' 00:30:26.671 14:23:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.671 14:23:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:27.238 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:27.238 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:27.238 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:27.238 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:27.238 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:27.238 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:27.238 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:27.495 "name": "raid_bdev1", 00:30:27.495 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:27.495 "strip_size_kb": 0, 00:30:27.495 "state": "online", 00:30:27.495 "raid_level": "raid1", 00:30:27.495 "superblock": true, 00:30:27.495 "num_base_bdevs": 4, 00:30:27.495 "num_base_bdevs_discovered": 2, 00:30:27.495 "num_base_bdevs_operational": 2, 00:30:27.495 "base_bdevs_list": [ 00:30:27.495 { 00:30:27.495 "name": null, 00:30:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.495 "is_configured": false, 00:30:27.495 "data_offset": 2048, 00:30:27.495 "data_size": 63488 00:30:27.495 }, 00:30:27.495 { 00:30:27.495 "name": null, 00:30:27.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.495 "is_configured": false, 00:30:27.495 "data_offset": 2048, 00:30:27.495 "data_size": 63488 00:30:27.495 }, 00:30:27.495 { 00:30:27.495 "name": "BaseBdev3", 00:30:27.495 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:27.495 "is_configured": true, 00:30:27.495 "data_offset": 2048, 00:30:27.495 "data_size": 63488 00:30:27.495 }, 00:30:27.495 { 00:30:27.495 "name": "BaseBdev4", 00:30:27.495 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:27.495 "is_configured": true, 00:30:27.495 "data_offset": 2048, 00:30:27.495 "data_size": 63488 00:30:27.495 } 00:30:27.495 ] 00:30:27.495 }' 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:27.495 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:27.496 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:27.753 [2024-07-15 14:23:13.699550] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:27.753 [2024-07-15 14:23:13.699904] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:27.753 [2024-07-15 14:23:13.700042] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:27.753 request: 00:30:27.753 { 00:30:27.753 "base_bdev": "BaseBdev1", 00:30:27.753 "raid_bdev": "raid_bdev1", 00:30:27.753 "method": "bdev_raid_add_base_bdev", 00:30:27.753 "req_id": 1 00:30:27.753 } 00:30:27.753 Got JSON-RPC error response 00:30:27.753 response: 00:30:27.753 { 00:30:27.753 "code": -22, 00:30:27.753 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:27.753 } 00:30:27.753 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:30:27.753 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:27.753 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:27.753 14:23:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:27.753 14:23:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.125 14:23:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.125 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:29.125 "name": "raid_bdev1", 00:30:29.125 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:29.125 "strip_size_kb": 0, 00:30:29.125 "state": "online", 00:30:29.125 "raid_level": "raid1", 00:30:29.125 "superblock": true, 00:30:29.125 "num_base_bdevs": 4, 00:30:29.125 "num_base_bdevs_discovered": 2, 00:30:29.125 "num_base_bdevs_operational": 2, 00:30:29.125 "base_bdevs_list": [ 00:30:29.125 { 00:30:29.125 "name": null, 00:30:29.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.125 "is_configured": false, 00:30:29.125 "data_offset": 2048, 00:30:29.125 "data_size": 63488 00:30:29.125 }, 00:30:29.125 { 00:30:29.125 "name": null, 00:30:29.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:29.125 "is_configured": false, 00:30:29.125 "data_offset": 2048, 00:30:29.125 "data_size": 63488 00:30:29.125 }, 00:30:29.125 { 00:30:29.125 "name": "BaseBdev3", 00:30:29.125 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:29.125 "is_configured": true, 00:30:29.125 "data_offset": 2048, 00:30:29.125 "data_size": 63488 00:30:29.125 }, 00:30:29.125 { 00:30:29.125 "name": "BaseBdev4", 00:30:29.125 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:29.125 "is_configured": true, 00:30:29.125 "data_offset": 2048, 00:30:29.125 "data_size": 63488 00:30:29.125 } 00:30:29.125 ] 00:30:29.125 }' 00:30:29.125 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:29.125 14:23:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:29.692 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:29.692 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:29.692 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:29.692 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:29.692 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:29.692 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.692 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.260 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:30.260 "name": "raid_bdev1", 00:30:30.260 "uuid": "7e35b6b1-29c2-4314-a635-e686b6024f36", 00:30:30.260 "strip_size_kb": 0, 00:30:30.260 "state": "online", 00:30:30.260 "raid_level": "raid1", 00:30:30.260 "superblock": 
true, 00:30:30.260 "num_base_bdevs": 4, 00:30:30.260 "num_base_bdevs_discovered": 2, 00:30:30.260 "num_base_bdevs_operational": 2, 00:30:30.260 "base_bdevs_list": [ 00:30:30.260 { 00:30:30.260 "name": null, 00:30:30.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.260 "is_configured": false, 00:30:30.260 "data_offset": 2048, 00:30:30.260 "data_size": 63488 00:30:30.260 }, 00:30:30.260 { 00:30:30.260 "name": null, 00:30:30.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:30.260 "is_configured": false, 00:30:30.260 "data_offset": 2048, 00:30:30.260 "data_size": 63488 00:30:30.260 }, 00:30:30.260 { 00:30:30.260 "name": "BaseBdev3", 00:30:30.260 "uuid": "7a8d4191-0f81-536e-9f82-b06ef2d6dda1", 00:30:30.260 "is_configured": true, 00:30:30.260 "data_offset": 2048, 00:30:30.260 "data_size": 63488 00:30:30.260 }, 00:30:30.260 { 00:30:30.260 "name": "BaseBdev4", 00:30:30.260 "uuid": "8c0817a0-8514-593b-93a9-b71c9eda8fcd", 00:30:30.260 "is_configured": true, 00:30:30.260 "data_offset": 2048, 00:30:30.260 "data_size": 63488 00:30:30.260 } 00:30:30.260 ] 00:30:30.260 }' 00:30:30.260 14:23:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 213755 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@948 -- # '[' -z 213755 ']' 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # kill -0 213755 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # uname 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 213755 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 213755' 00:30:30.260 killing process with pid 213755 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@967 -- # kill 213755 00:30:30.260 Received shutdown signal, test time was about 60.000000 seconds 00:30:30.260 00:30:30.260 Latency(us) 00:30:30.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:30.260 =================================================================================================================== 00:30:30.260 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:30.260 14:23:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # wait 213755 00:30:30.260 [2024-07-15 14:23:16.079169] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:30.260 [2024-07-15 14:23:16.079273] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:30.260 [2024-07-15 14:23:16.079319] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:30:30.260 [2024-07-15 14:23:16.079330] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:30:30.520 [2024-07-15 14:23:16.490755] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:30:31.894 00:30:31.894 real 0m39.954s 00:30:31.894 user 1m0.359s 00:30:31.894 sys 0m5.996s 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:31.894 ************************************ 00:30:31.894 END TEST raid_rebuild_test_sb 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:31.894 ************************************ 00:30:31.894 14:23:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:31.894 14:23:17 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:30:31.894 14:23:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:31.894 14:23:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.894 14:23:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:31.894 ************************************ 00:30:31.894 START TEST raid_rebuild_test_io 00:30:31.894 ************************************ 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 false true true 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=214694 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 214694 /var/tmp/spdk-raid.sock 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@829 -- # '[' -z 214694 ']' 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:31.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:31.894 14:23:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:31.894 [2024-07-15 14:23:17.759549] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:30:31.894 [2024-07-15 14:23:17.759947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid214694 ] 00:30:31.894 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:31.894 Zero copy mechanism will not be used. 
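raid_rebuild_test_io repeats the rebuild flow while bdevperf generates background I/O against raid_bdev1. How the fixture comes up, sketched from the flags in the trace above (the binary is the bdevperf example from this build; waitforlisten is the autotest helper that blocks until the RPC socket answers):

    # -z makes bdevperf wait for a perform_tests RPC before issuing I/O, so the raid set can be built first
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock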
00:30:32.152 [2024-07-15 14:23:17.924561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.152 [2024-07-15 14:23:18.138104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.410 [2024-07-15 14:23:18.340607] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:32.976 14:23:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:32.976 14:23:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # return 0 00:30:32.976 14:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:32.976 14:23:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:33.234 BaseBdev1_malloc 00:30:33.234 14:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:33.493 [2024-07-15 14:23:19.261718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:33.493 [2024-07-15 14:23:19.262070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:33.493 [2024-07-15 14:23:19.262243] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:33.493 [2024-07-15 14:23:19.262406] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:33.493 [2024-07-15 14:23:19.264252] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:33.493 [2024-07-15 14:23:19.264433] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:33.493 BaseBdev1 00:30:33.493 14:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:33.493 14:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:33.752 BaseBdev2_malloc 00:30:33.752 14:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:34.011 [2024-07-15 14:23:19.795646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:34.011 [2024-07-15 14:23:19.795917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:34.011 [2024-07-15 14:23:19.796006] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:34.011 [2024-07-15 14:23:19.796231] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:34.011 [2024-07-15 14:23:19.798069] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:34.011 [2024-07-15 14:23:19.798233] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:34.011 BaseBdev2 00:30:34.011 14:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:34.011 14:23:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:34.269 BaseBdev3_malloc 00:30:34.269 14:23:20 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:34.528 [2024-07-15 14:23:20.394523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:34.528 [2024-07-15 14:23:20.394879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:34.528 [2024-07-15 14:23:20.395037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:34.528 [2024-07-15 14:23:20.395172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:34.528 [2024-07-15 14:23:20.397015] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:34.528 [2024-07-15 14:23:20.397196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:34.528 BaseBdev3 00:30:34.528 14:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:34.528 14:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:34.786 BaseBdev4_malloc 00:30:34.786 14:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:35.043 [2024-07-15 14:23:20.917211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:35.043 [2024-07-15 14:23:20.917635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.043 [2024-07-15 14:23:20.917809] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:35.043 [2024-07-15 14:23:20.917938] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.043 [2024-07-15 14:23:20.919667] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.043 [2024-07-15 14:23:20.919869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:35.043 BaseBdev4 00:30:35.043 14:23:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:35.300 spare_malloc 00:30:35.300 14:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:35.557 spare_delay 00:30:35.557 14:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:35.814 [2024-07-15 14:23:21.699580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:35.814 [2024-07-15 14:23:21.699957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:35.814 [2024-07-15 14:23:21.700114] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:35.814 [2024-07-15 14:23:21.700252] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:35.814 [2024-07-15 14:23:21.702190] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:35.814 [2024-07-15 
14:23:21.702360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:35.814 spare 00:30:35.814 14:23:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:36.072 [2024-07-15 14:23:21.991699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:36.072 [2024-07-15 14:23:21.993534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:36.072 [2024-07-15 14:23:21.993774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:36.072 [2024-07-15 14:23:21.993869] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:36.072 [2024-07-15 14:23:21.994061] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:30:36.072 [2024-07-15 14:23:21.994179] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:36.072 [2024-07-15 14:23:21.994362] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:36.072 [2024-07-15 14:23:21.994748] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:30:36.072 [2024-07-15 14:23:21.994873] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:30:36.072 [2024-07-15 14:23:21.995135] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:36.072 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:36.330 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:36.330 "name": "raid_bdev1", 00:30:36.330 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:36.330 "strip_size_kb": 0, 00:30:36.330 "state": "online", 00:30:36.330 "raid_level": "raid1", 00:30:36.330 "superblock": false, 00:30:36.330 "num_base_bdevs": 4, 00:30:36.330 "num_base_bdevs_discovered": 4, 00:30:36.330 "num_base_bdevs_operational": 4, 00:30:36.330 "base_bdevs_list": [ 00:30:36.330 { 
00:30:36.330 "name": "BaseBdev1", 00:30:36.330 "uuid": "b843e971-a4fa-5aa5-ad74-de847b0ee58d", 00:30:36.330 "is_configured": true, 00:30:36.330 "data_offset": 0, 00:30:36.330 "data_size": 65536 00:30:36.330 }, 00:30:36.330 { 00:30:36.330 "name": "BaseBdev2", 00:30:36.330 "uuid": "7b8a5b25-a062-58ef-abfb-a25f02b8d3ab", 00:30:36.330 "is_configured": true, 00:30:36.330 "data_offset": 0, 00:30:36.330 "data_size": 65536 00:30:36.330 }, 00:30:36.330 { 00:30:36.330 "name": "BaseBdev3", 00:30:36.330 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:36.330 "is_configured": true, 00:30:36.330 "data_offset": 0, 00:30:36.330 "data_size": 65536 00:30:36.330 }, 00:30:36.330 { 00:30:36.330 "name": "BaseBdev4", 00:30:36.330 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:36.330 "is_configured": true, 00:30:36.330 "data_offset": 0, 00:30:36.330 "data_size": 65536 00:30:36.330 } 00:30:36.330 ] 00:30:36.330 }' 00:30:36.330 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:36.330 14:23:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:36.895 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:36.895 14:23:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:37.152 [2024-07-15 14:23:23.084042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:37.152 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:30:37.152 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.152 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:37.409 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:30:37.409 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:37.409 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:37.409 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:37.666 [2024-07-15 14:23:23.443620] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:37.666 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:37.666 Zero copy mechanism will not be used. 00:30:37.666 Running I/O for 60 seconds... 
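At this point the raid1 set has been assembled from four passthru bdevs and bdevperf starts its 60-second random read/write workload; BaseBdev1 is then pulled out under I/O and the test expects the array to stay online with three members. A compact sketch of that check (ordering is illustrative, the harness interleaves these steps), reusing the jq pattern shown earlier with the same abbreviated paths:

    # release bdevperf (started with -z) so the background workload begins
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    # drop a base bdev while I/O is running, then confirm the array degrades but stays online
    rpc.py bdev_raid_remove_base_bdev BaseBdev1
    rpc.py bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)"'
    # expected output: online 3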
00:30:37.666 [2024-07-15 14:23:23.597975] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:37.666 [2024-07-15 14:23:23.606652] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000062f0 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.666 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:38.232 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.232 "name": "raid_bdev1", 00:30:38.232 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:38.232 "strip_size_kb": 0, 00:30:38.232 "state": "online", 00:30:38.232 "raid_level": "raid1", 00:30:38.232 "superblock": false, 00:30:38.232 "num_base_bdevs": 4, 00:30:38.232 "num_base_bdevs_discovered": 3, 00:30:38.232 "num_base_bdevs_operational": 3, 00:30:38.232 "base_bdevs_list": [ 00:30:38.232 { 00:30:38.232 "name": null, 00:30:38.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.232 "is_configured": false, 00:30:38.232 "data_offset": 0, 00:30:38.232 "data_size": 65536 00:30:38.232 }, 00:30:38.232 { 00:30:38.232 "name": "BaseBdev2", 00:30:38.232 "uuid": "7b8a5b25-a062-58ef-abfb-a25f02b8d3ab", 00:30:38.232 "is_configured": true, 00:30:38.232 "data_offset": 0, 00:30:38.232 "data_size": 65536 00:30:38.232 }, 00:30:38.232 { 00:30:38.232 "name": "BaseBdev3", 00:30:38.232 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:38.233 "is_configured": true, 00:30:38.233 "data_offset": 0, 00:30:38.233 "data_size": 65536 00:30:38.233 }, 00:30:38.233 { 00:30:38.233 "name": "BaseBdev4", 00:30:38.233 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:38.233 "is_configured": true, 00:30:38.233 "data_offset": 0, 00:30:38.233 "data_size": 65536 00:30:38.233 } 00:30:38.233 ] 00:30:38.233 }' 00:30:38.233 14:23:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.233 14:23:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:38.803 14:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:39.061 [2024-07-15 14:23:24.948174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:30:39.061 14:23:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:30:39.061 [2024-07-15 14:23:24.998641] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:39.061 [2024-07-15 14:23:25.000272] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:39.319 [2024-07-15 14:23:25.108618] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:39.319 [2024-07-15 14:23:25.109562] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:39.319 [2024-07-15 14:23:25.227278] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:39.319 [2024-07-15 14:23:25.228126] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:39.577 [2024-07-15 14:23:25.564863] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:39.835 [2024-07-15 14:23:25.680508] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:40.093 14:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:40.093 14:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:40.093 14:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:40.093 14:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:40.093 14:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:40.093 14:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.093 14:23:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:40.093 [2024-07-15 14:23:26.006890] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:40.352 [2024-07-15 14:23:26.133095] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:40.352 [2024-07-15 14:23:26.133979] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:30:40.352 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:40.352 "name": "raid_bdev1", 00:30:40.352 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:40.352 "strip_size_kb": 0, 00:30:40.352 "state": "online", 00:30:40.352 "raid_level": "raid1", 00:30:40.352 "superblock": false, 00:30:40.352 "num_base_bdevs": 4, 00:30:40.352 "num_base_bdevs_discovered": 4, 00:30:40.352 "num_base_bdevs_operational": 4, 00:30:40.352 "process": { 00:30:40.352 "type": "rebuild", 00:30:40.352 "target": "spare", 00:30:40.352 "progress": { 00:30:40.352 "blocks": 16384, 00:30:40.352 "percent": 25 00:30:40.352 } 00:30:40.352 }, 00:30:40.352 "base_bdevs_list": [ 00:30:40.352 { 00:30:40.352 "name": "spare", 00:30:40.352 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:40.352 "is_configured": true, 00:30:40.352 
"data_offset": 0, 00:30:40.352 "data_size": 65536 00:30:40.352 }, 00:30:40.352 { 00:30:40.352 "name": "BaseBdev2", 00:30:40.352 "uuid": "7b8a5b25-a062-58ef-abfb-a25f02b8d3ab", 00:30:40.352 "is_configured": true, 00:30:40.352 "data_offset": 0, 00:30:40.352 "data_size": 65536 00:30:40.352 }, 00:30:40.352 { 00:30:40.352 "name": "BaseBdev3", 00:30:40.352 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:40.352 "is_configured": true, 00:30:40.352 "data_offset": 0, 00:30:40.352 "data_size": 65536 00:30:40.352 }, 00:30:40.352 { 00:30:40.352 "name": "BaseBdev4", 00:30:40.352 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:40.352 "is_configured": true, 00:30:40.352 "data_offset": 0, 00:30:40.352 "data_size": 65536 00:30:40.352 } 00:30:40.352 ] 00:30:40.352 }' 00:30:40.352 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:40.352 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:40.352 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:40.611 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:40.611 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:30:40.611 [2024-07-15 14:23:26.456856] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:40.869 [2024-07-15 14:23:26.625068] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:40.869 [2024-07-15 14:23:26.768816] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:30:40.869 [2024-07-15 14:23:26.776731] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:40.870 [2024-07-15 14:23:26.776900] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:30:40.870 [2024-07-15 14:23:26.776950] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:30:40.870 [2024-07-15 14:23:26.801185] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000062f0 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.870 14:23:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:41.436 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:41.436 "name": "raid_bdev1", 00:30:41.436 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:41.436 "strip_size_kb": 0, 00:30:41.436 "state": "online", 00:30:41.436 "raid_level": "raid1", 00:30:41.436 "superblock": false, 00:30:41.436 "num_base_bdevs": 4, 00:30:41.436 "num_base_bdevs_discovered": 3, 00:30:41.436 "num_base_bdevs_operational": 3, 00:30:41.436 "base_bdevs_list": [ 00:30:41.436 { 00:30:41.436 "name": null, 00:30:41.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.436 "is_configured": false, 00:30:41.436 "data_offset": 0, 00:30:41.436 "data_size": 65536 00:30:41.436 }, 00:30:41.436 { 00:30:41.436 "name": "BaseBdev2", 00:30:41.436 "uuid": "7b8a5b25-a062-58ef-abfb-a25f02b8d3ab", 00:30:41.436 "is_configured": true, 00:30:41.436 "data_offset": 0, 00:30:41.436 "data_size": 65536 00:30:41.436 }, 00:30:41.436 { 00:30:41.436 "name": "BaseBdev3", 00:30:41.436 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:41.437 "is_configured": true, 00:30:41.437 "data_offset": 0, 00:30:41.437 "data_size": 65536 00:30:41.437 }, 00:30:41.437 { 00:30:41.437 "name": "BaseBdev4", 00:30:41.437 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:41.437 "is_configured": true, 00:30:41.437 "data_offset": 0, 00:30:41.437 "data_size": 65536 00:30:41.437 } 00:30:41.437 ] 00:30:41.437 }' 00:30:41.437 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:41.437 14:23:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:42.003 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:42.003 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:42.003 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:42.003 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:42.003 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:42.003 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:42.003 14:23:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:42.262 14:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.262 "name": "raid_bdev1", 00:30:42.262 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:42.262 "strip_size_kb": 0, 00:30:42.262 "state": "online", 00:30:42.262 "raid_level": "raid1", 00:30:42.262 "superblock": false, 00:30:42.262 "num_base_bdevs": 4, 00:30:42.262 "num_base_bdevs_discovered": 3, 00:30:42.262 "num_base_bdevs_operational": 3, 00:30:42.262 "base_bdevs_list": [ 00:30:42.262 { 00:30:42.262 "name": null, 00:30:42.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:42.262 "is_configured": false, 00:30:42.262 "data_offset": 0, 00:30:42.262 "data_size": 65536 00:30:42.262 }, 00:30:42.262 { 00:30:42.262 "name": "BaseBdev2", 00:30:42.262 "uuid": "7b8a5b25-a062-58ef-abfb-a25f02b8d3ab", 00:30:42.262 "is_configured": true, 
00:30:42.262 "data_offset": 0, 00:30:42.262 "data_size": 65536 00:30:42.262 }, 00:30:42.262 { 00:30:42.262 "name": "BaseBdev3", 00:30:42.262 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:42.262 "is_configured": true, 00:30:42.262 "data_offset": 0, 00:30:42.262 "data_size": 65536 00:30:42.262 }, 00:30:42.262 { 00:30:42.262 "name": "BaseBdev4", 00:30:42.262 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:42.262 "is_configured": true, 00:30:42.262 "data_offset": 0, 00:30:42.262 "data_size": 65536 00:30:42.262 } 00:30:42.262 ] 00:30:42.262 }' 00:30:42.262 14:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:42.262 14:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:42.262 14:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:42.520 14:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:42.520 14:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:30:42.520 [2024-07-15 14:23:28.499936] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:30:42.779 14:23:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:30:42.779 [2024-07-15 14:23:28.559785] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:30:42.779 [2024-07-15 14:23:28.561339] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:30:42.779 [2024-07-15 14:23:28.677130] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:30:43.037 [2024-07-15 14:23:28.791979] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:43.037 [2024-07-15 14:23:28.792486] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:30:43.037 [2024-07-15 14:23:29.026571] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:30:43.295 [2024-07-15 14:23:29.252911] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:43.867 [2024-07-15 14:23:29.607029] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:30:43.867 14:23:29 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:43.867 "name": "raid_bdev1", 00:30:43.867 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:43.867 "strip_size_kb": 0, 00:30:43.867 "state": "online", 00:30:43.867 "raid_level": "raid1", 00:30:43.867 "superblock": false, 00:30:43.867 "num_base_bdevs": 4, 00:30:43.867 "num_base_bdevs_discovered": 4, 00:30:43.867 "num_base_bdevs_operational": 4, 00:30:43.867 "process": { 00:30:43.867 "type": "rebuild", 00:30:43.867 "target": "spare", 00:30:43.867 "progress": { 00:30:43.867 "blocks": 14336, 00:30:43.867 "percent": 21 00:30:43.867 } 00:30:43.867 }, 00:30:43.867 "base_bdevs_list": [ 00:30:43.867 { 00:30:43.867 "name": "spare", 00:30:43.867 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:43.867 "is_configured": true, 00:30:43.867 "data_offset": 0, 00:30:43.867 "data_size": 65536 00:30:43.867 }, 00:30:43.867 { 00:30:43.867 "name": "BaseBdev2", 00:30:43.867 "uuid": "7b8a5b25-a062-58ef-abfb-a25f02b8d3ab", 00:30:43.867 "is_configured": true, 00:30:43.867 "data_offset": 0, 00:30:43.867 "data_size": 65536 00:30:43.867 }, 00:30:43.867 { 00:30:43.867 "name": "BaseBdev3", 00:30:43.867 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:43.867 "is_configured": true, 00:30:43.867 "data_offset": 0, 00:30:43.867 "data_size": 65536 00:30:43.867 }, 00:30:43.867 { 00:30:43.867 "name": "BaseBdev4", 00:30:43.867 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:43.867 "is_configured": true, 00:30:43.867 "data_offset": 0, 00:30:43.867 "data_size": 65536 00:30:43.867 } 00:30:43.867 ] 00:30:43.867 }' 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:43.867 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:44.125 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:44.125 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:30:44.125 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:30:44.125 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:30:44.125 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:30:44.125 14:23:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:44.125 [2024-07-15 14:23:30.063277] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:30:44.383 [2024-07-15 14:23:30.143914] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:44.383 [2024-07-15 14:23:30.171675] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:30:44.383 [2024-07-15 14:23:30.278221] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:30:44.383 [2024-07-15 14:23:30.278460] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006560 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.383 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.642 [2024-07-15 14:23:30.511318] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:30:44.642 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:44.642 "name": "raid_bdev1", 00:30:44.642 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:44.642 "strip_size_kb": 0, 00:30:44.642 "state": "online", 00:30:44.642 "raid_level": "raid1", 00:30:44.642 "superblock": false, 00:30:44.642 "num_base_bdevs": 4, 00:30:44.642 "num_base_bdevs_discovered": 3, 00:30:44.642 "num_base_bdevs_operational": 3, 00:30:44.642 "process": { 00:30:44.642 "type": "rebuild", 00:30:44.642 "target": "spare", 00:30:44.642 "progress": { 00:30:44.642 "blocks": 26624, 00:30:44.642 "percent": 40 00:30:44.642 } 00:30:44.642 }, 00:30:44.642 "base_bdevs_list": [ 00:30:44.642 { 00:30:44.642 "name": "spare", 00:30:44.642 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:44.642 "is_configured": true, 00:30:44.642 "data_offset": 0, 00:30:44.642 "data_size": 65536 00:30:44.642 }, 00:30:44.642 { 00:30:44.642 "name": null, 00:30:44.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:44.642 "is_configured": false, 00:30:44.642 "data_offset": 0, 00:30:44.642 "data_size": 65536 00:30:44.642 }, 00:30:44.642 { 00:30:44.642 "name": "BaseBdev3", 00:30:44.642 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:44.642 "is_configured": true, 00:30:44.642 "data_offset": 0, 00:30:44.642 "data_size": 65536 00:30:44.642 }, 00:30:44.642 { 00:30:44.642 "name": "BaseBdev4", 00:30:44.642 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:44.642 "is_configured": true, 00:30:44.642 "data_offset": 0, 00:30:44.642 "data_size": 65536 00:30:44.642 } 00:30:44.642 ] 00:30:44.642 }' 00:30:44.642 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:44.642 [2024-07-15 14:23:30.623956] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:30:44.642 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:44.642 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:44.900 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:44.900 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=1067 00:30:44.900 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:44.900 
14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:44.900 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:44.900 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:44.900 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:44.901 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:44.901 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:44.901 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:44.901 [2024-07-15 14:23:30.857853] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:30:45.159 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:45.159 "name": "raid_bdev1", 00:30:45.159 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:45.159 "strip_size_kb": 0, 00:30:45.159 "state": "online", 00:30:45.159 "raid_level": "raid1", 00:30:45.159 "superblock": false, 00:30:45.159 "num_base_bdevs": 4, 00:30:45.159 "num_base_bdevs_discovered": 3, 00:30:45.159 "num_base_bdevs_operational": 3, 00:30:45.159 "process": { 00:30:45.159 "type": "rebuild", 00:30:45.159 "target": "spare", 00:30:45.159 "progress": { 00:30:45.159 "blocks": 32768, 00:30:45.159 "percent": 50 00:30:45.159 } 00:30:45.159 }, 00:30:45.159 "base_bdevs_list": [ 00:30:45.159 { 00:30:45.159 "name": "spare", 00:30:45.159 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:45.159 "is_configured": true, 00:30:45.159 "data_offset": 0, 00:30:45.159 "data_size": 65536 00:30:45.159 }, 00:30:45.159 { 00:30:45.159 "name": null, 00:30:45.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.159 "is_configured": false, 00:30:45.159 "data_offset": 0, 00:30:45.159 "data_size": 65536 00:30:45.159 }, 00:30:45.159 { 00:30:45.159 "name": "BaseBdev3", 00:30:45.159 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:45.159 "is_configured": true, 00:30:45.159 "data_offset": 0, 00:30:45.159 "data_size": 65536 00:30:45.159 }, 00:30:45.159 { 00:30:45.159 "name": "BaseBdev4", 00:30:45.159 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:45.159 "is_configured": true, 00:30:45.159 "data_offset": 0, 00:30:45.159 "data_size": 65536 00:30:45.159 } 00:30:45.159 ] 00:30:45.159 }' 00:30:45.159 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:45.159 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:45.159 14:23:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:45.159 14:23:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:45.159 14:23:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:45.727 [2024-07-15 14:23:31.663605] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:30:45.985 [2024-07-15 14:23:31.866897] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:30:46.245 14:23:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:46.245 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:46.245 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:46.245 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:46.245 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:46.245 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:46.245 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.245 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:46.504 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:46.504 "name": "raid_bdev1", 00:30:46.504 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:46.504 "strip_size_kb": 0, 00:30:46.504 "state": "online", 00:30:46.504 "raid_level": "raid1", 00:30:46.504 "superblock": false, 00:30:46.504 "num_base_bdevs": 4, 00:30:46.504 "num_base_bdevs_discovered": 3, 00:30:46.504 "num_base_bdevs_operational": 3, 00:30:46.504 "process": { 00:30:46.504 "type": "rebuild", 00:30:46.504 "target": "spare", 00:30:46.504 "progress": { 00:30:46.504 "blocks": 53248, 00:30:46.504 "percent": 81 00:30:46.504 } 00:30:46.504 }, 00:30:46.504 "base_bdevs_list": [ 00:30:46.504 { 00:30:46.504 "name": "spare", 00:30:46.504 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:46.504 "is_configured": true, 00:30:46.504 "data_offset": 0, 00:30:46.504 "data_size": 65536 00:30:46.504 }, 00:30:46.504 { 00:30:46.504 "name": null, 00:30:46.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.504 "is_configured": false, 00:30:46.504 "data_offset": 0, 00:30:46.504 "data_size": 65536 00:30:46.504 }, 00:30:46.504 { 00:30:46.504 "name": "BaseBdev3", 00:30:46.504 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:46.504 "is_configured": true, 00:30:46.504 "data_offset": 0, 00:30:46.504 "data_size": 65536 00:30:46.504 }, 00:30:46.504 { 00:30:46.504 "name": "BaseBdev4", 00:30:46.504 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:46.504 "is_configured": true, 00:30:46.504 "data_offset": 0, 00:30:46.504 "data_size": 65536 00:30:46.504 } 00:30:46.504 ] 00:30:46.504 }' 00:30:46.504 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:46.504 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:30:46.504 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:46.504 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:30:46.504 14:23:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:30:46.763 [2024-07-15 14:23:32.519972] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:30:47.021 [2024-07-15 14:23:32.946819] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:30:47.300 [2024-07-15 14:23:33.046887] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 
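The progress checks traced above reduce to a small RPC-plus-jq pattern. The lines below are a minimal sketch of that pattern, reusing the socket path (/var/tmp/spdk-raid.sock), rpc.py location, and bdev name (raid_bdev1) from this run; the shell variable names are illustrative only.

# Query the rebuild process state of a raid bdev (the pattern behind verify_raid_bdev_process above)
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$($RPC -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
echo "$info" | jq -r '.process.type // "none"'      # "rebuild" while the process runs, "none" afterwards
echo "$info" | jq -r '.process.target // "none"'    # "spare" is the rebuild target in this test
echo "$info" | jq -r '.process.progress.percent'    # rebuild progress in percent (null once the process is gone)

While the rebuild is running the first two queries return rebuild and spare, and after raid_bdev_process_finish_done fires they fall back to none, which is exactly the transition the [[ ... ]] checks in the trace assert.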
00:30:47.300 [2024-07-15 14:23:33.049309] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.576 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:47.834 "name": "raid_bdev1", 00:30:47.834 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:47.834 "strip_size_kb": 0, 00:30:47.834 "state": "online", 00:30:47.834 "raid_level": "raid1", 00:30:47.834 "superblock": false, 00:30:47.834 "num_base_bdevs": 4, 00:30:47.834 "num_base_bdevs_discovered": 3, 00:30:47.834 "num_base_bdevs_operational": 3, 00:30:47.834 "base_bdevs_list": [ 00:30:47.834 { 00:30:47.834 "name": "spare", 00:30:47.834 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:47.834 "is_configured": true, 00:30:47.834 "data_offset": 0, 00:30:47.834 "data_size": 65536 00:30:47.834 }, 00:30:47.834 { 00:30:47.834 "name": null, 00:30:47.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.834 "is_configured": false, 00:30:47.834 "data_offset": 0, 00:30:47.834 "data_size": 65536 00:30:47.834 }, 00:30:47.834 { 00:30:47.834 "name": "BaseBdev3", 00:30:47.834 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:47.834 "is_configured": true, 00:30:47.834 "data_offset": 0, 00:30:47.834 "data_size": 65536 00:30:47.834 }, 00:30:47.834 { 00:30:47.834 "name": "BaseBdev4", 00:30:47.834 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:47.834 "is_configured": true, 00:30:47.834 "data_offset": 0, 00:30:47.834 "data_size": 65536 00:30:47.834 } 00:30:47.834 ] 00:30:47.834 }' 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:47.834 14:23:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.092 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.092 "name": "raid_bdev1", 00:30:48.092 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:48.092 "strip_size_kb": 0, 00:30:48.092 "state": "online", 00:30:48.092 "raid_level": "raid1", 00:30:48.092 "superblock": false, 00:30:48.092 "num_base_bdevs": 4, 00:30:48.092 "num_base_bdevs_discovered": 3, 00:30:48.092 "num_base_bdevs_operational": 3, 00:30:48.092 "base_bdevs_list": [ 00:30:48.092 { 00:30:48.092 "name": "spare", 00:30:48.092 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:48.092 "is_configured": true, 00:30:48.092 "data_offset": 0, 00:30:48.092 "data_size": 65536 00:30:48.092 }, 00:30:48.092 { 00:30:48.092 "name": null, 00:30:48.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.092 "is_configured": false, 00:30:48.092 "data_offset": 0, 00:30:48.092 "data_size": 65536 00:30:48.092 }, 00:30:48.092 { 00:30:48.092 "name": "BaseBdev3", 00:30:48.092 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:48.092 "is_configured": true, 00:30:48.092 "data_offset": 0, 00:30:48.092 "data_size": 65536 00:30:48.092 }, 00:30:48.092 { 00:30:48.092 "name": "BaseBdev4", 00:30:48.092 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:48.092 "is_configured": true, 00:30:48.092 "data_offset": 0, 00:30:48.092 "data_size": 65536 00:30:48.092 } 00:30:48.092 ] 00:30:48.092 }' 00:30:48.092 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:48.092 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:48.092 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.351 14:23:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:48.609 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.609 "name": "raid_bdev1", 00:30:48.609 "uuid": "25a1d410-23e4-4fff-a318-4f2b6220172d", 00:30:48.609 "strip_size_kb": 0, 00:30:48.609 "state": "online", 00:30:48.609 "raid_level": "raid1", 00:30:48.609 "superblock": false, 00:30:48.609 "num_base_bdevs": 4, 00:30:48.609 "num_base_bdevs_discovered": 3, 00:30:48.609 "num_base_bdevs_operational": 3, 00:30:48.609 "base_bdevs_list": [ 00:30:48.609 { 00:30:48.609 "name": "spare", 00:30:48.609 "uuid": "3c3d89d7-cde0-58c5-b26c-e2c52fabd465", 00:30:48.609 "is_configured": true, 00:30:48.609 "data_offset": 0, 00:30:48.609 "data_size": 65536 00:30:48.609 }, 00:30:48.609 { 00:30:48.609 "name": null, 00:30:48.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.609 "is_configured": false, 00:30:48.609 "data_offset": 0, 00:30:48.609 "data_size": 65536 00:30:48.609 }, 00:30:48.609 { 00:30:48.609 "name": "BaseBdev3", 00:30:48.609 "uuid": "bf71fde1-11d5-5a0a-8d02-15d62a375334", 00:30:48.609 "is_configured": true, 00:30:48.609 "data_offset": 0, 00:30:48.609 "data_size": 65536 00:30:48.609 }, 00:30:48.609 { 00:30:48.609 "name": "BaseBdev4", 00:30:48.609 "uuid": "da3800c4-9c62-51cc-a04c-6adaebdef379", 00:30:48.609 "is_configured": true, 00:30:48.609 "data_offset": 0, 00:30:48.609 "data_size": 65536 00:30:48.609 } 00:30:48.609 ] 00:30:48.609 }' 00:30:48.609 14:23:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.609 14:23:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:49.175 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:30:49.433 [2024-07-15 14:23:35.340850] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:49.433 [2024-07-15 14:23:35.341096] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:49.433 00:30:49.433 Latency(us) 00:30:49.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:49.433 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:30:49.433 raid_bdev1 : 11.97 138.08 414.25 0.00 0.00 10813.75 297.89 115343.36 00:30:49.433 =================================================================================================================== 00:30:49.433 Total : 138.08 414.25 0.00 0.00 10813.75 297.89 115343.36 00:30:49.433 [2024-07-15 14:23:35.434849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:49.433 [2024-07-15 14:23:35.435027] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:49.433 [2024-07-15 14:23:35.435142] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:49.433 0 00:30:49.433 [2024-07-15 14:23:35.435373] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:30:49.691 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:30:49.691 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:30:49.950 
14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:49.950 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:30:50.209 /dev/nbd0 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:50.209 14:23:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.209 1+0 records in 00:30:50.209 1+0 records out 00:30:50.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074364 s, 5.5 MB/s 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.209 14:23:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.209 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:30:50.468 /dev/nbd1 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:50.468 1+0 records in 00:30:50.468 1+0 records out 00:30:50.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580693 s, 7.1 MB/s 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:50.468 
14:23:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:50.468 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:51.061 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:51.061 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:51.061 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:51.061 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:51.061 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:51.062 14:23:36 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:30:51.062 /dev/nbd1 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:30:51.320 14:23:37 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # local i 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # break 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:30:51.320 1+0 records in 00:30:51.320 1+0 records out 00:30:51.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449111 s, 9.1 MB/s 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # size=4096 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # return 0 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:51.320 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:30:51.578 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 214694 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@948 -- # '[' -z 214694 ']' 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # kill -0 214694 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # uname 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 214694 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 214694' 00:30:51.837 killing process with pid 214694 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@967 -- # kill 214694 00:30:51.837 Received shutdown signal, test time was about 14.300377 seconds 00:30:51.837 00:30:51.837 Latency(us) 00:30:51.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.837 =================================================================================================================== 00:30:51.837 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.837 [2024-07-15 
14:23:37.746105] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:51.837 14:23:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # wait 214694 00:30:52.404 [2024-07-15 14:23:38.109670] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:53.340 14:23:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:53.340 00:30:53.340 real 0m21.587s 00:30:53.340 user 0m33.853s 00:30:53.340 sys 0m2.628s 00:30:53.340 14:23:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:53.340 14:23:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.340 ************************************ 00:30:53.340 END TEST raid_rebuild_test_io 00:30:53.340 ************************************ 00:30:53.340 14:23:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:30:53.340 14:23:39 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:30:53.340 14:23:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:30:53.340 14:23:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.340 14:23:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:53.599 ************************************ 00:30:53.599 START TEST raid_rebuild_test_sb_io 00:30:53.599 ************************************ 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 4 true true true 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs 
)) 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=215208 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 215208 /var/tmp/spdk-raid.sock 00:30:53.599 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@829 -- # '[' -z 215208 ']' 00:30:53.600 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:53.600 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:53.600 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:53.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:53.600 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:53.600 14:23:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:53.600 [2024-07-15 14:23:39.400865] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:30:53.600 [2024-07-15 14:23:39.401189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid215208 ] 00:30:53.600 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:53.600 Zero copy mechanism will not be used. 
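The sb_io variant wires up the same harness as the io test: bdevperf is started idle against a private RPC socket, the raid stack is assembled over that socket, and the workload is only started afterwards. The sketch below strings together the binary path, flags, and RPC socket that this trace uses; waitforlisten is the autotest_common.sh helper traced above and is shown only as a comment.

# Start bdevperf paused (-z) on a private RPC socket, configure bdevs over RPC, then run the workload
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC_SOCK=/var/tmp/spdk-raid.sock
$BDEVPERF -r $RPC_SOCK -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# waitforlisten "$raid_pid" "$RPC_SOCK"   # autotest_common.sh helper, as traced above
# ...create the base bdevs and raid_bdev1 over $RPC_SOCK (bdev_malloc_create, bdev_passthru_create,
#    bdev_raid_create -s -r raid1 ..., as the following trace shows), then kick off the I/O:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $RPC_SOCK perform_tests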
00:30:53.600 [2024-07-15 14:23:39.549705] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.858 [2024-07-15 14:23:39.764480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.117 [2024-07-15 14:23:39.960540] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:54.375 14:23:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:54.375 14:23:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # return 0 00:30:54.375 14:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:54.375 14:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:54.633 BaseBdev1_malloc 00:30:54.892 14:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:54.892 [2024-07-15 14:23:40.851992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:30:54.892 [2024-07-15 14:23:40.852323] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:54.892 [2024-07-15 14:23:40.852491] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:30:54.892 [2024-07-15 14:23:40.852619] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:54.892 [2024-07-15 14:23:40.854453] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:54.892 [2024-07-15 14:23:40.854660] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:54.892 BaseBdev1 00:30:54.892 14:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:54.892 14:23:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:55.150 BaseBdev2_malloc 00:30:55.408 14:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:30:55.408 [2024-07-15 14:23:41.383028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:30:55.408 [2024-07-15 14:23:41.383336] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:55.408 [2024-07-15 14:23:41.383499] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:30:55.408 [2024-07-15 14:23:41.383638] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:55.408 [2024-07-15 14:23:41.385467] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:55.408 [2024-07-15 14:23:41.385654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:55.408 BaseBdev2 00:30:55.408 14:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:55.408 14:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:55.975 BaseBdev3_malloc 00:30:55.975 14:23:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:30:55.975 [2024-07-15 14:23:41.941595] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:30:55.975 [2024-07-15 14:23:41.941893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:55.975 [2024-07-15 14:23:41.942049] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:55.975 [2024-07-15 14:23:41.942184] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:55.975 [2024-07-15 14:23:41.943980] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:55.975 [2024-07-15 14:23:41.944153] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:55.975 BaseBdev3 00:30:55.975 14:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:30:55.975 14:23:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:56.233 BaseBdev4_malloc 00:30:56.233 14:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:30:56.491 [2024-07-15 14:23:42.431845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:30:56.491 [2024-07-15 14:23:42.432171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.491 [2024-07-15 14:23:42.432362] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:30:56.491 [2024-07-15 14:23:42.432505] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.491 [2024-07-15 14:23:42.434339] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.491 [2024-07-15 14:23:42.434507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:56.491 BaseBdev4 00:30:56.491 14:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:30:57.058 spare_malloc 00:30:57.058 14:23:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:30:57.058 spare_delay 00:30:57.058 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:30:57.318 [2024-07-15 14:23:43.282417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:30:57.318 [2024-07-15 14:23:43.282673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.318 [2024-07-15 14:23:43.282843] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:57.318 [2024-07-15 14:23:43.282983] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.318 [2024-07-15 14:23:43.284921] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:30:57.318 [2024-07-15 14:23:43.285105] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:30:57.318 spare 00:30:57.318 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:30:57.577 [2024-07-15 14:23:43.522543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:57.577 [2024-07-15 14:23:43.524160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:57.577 [2024-07-15 14:23:43.524345] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:57.577 [2024-07-15 14:23:43.524505] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:57.577 [2024-07-15 14:23:43.524797] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:30:57.577 [2024-07-15 14:23:43.524926] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:57.577 [2024-07-15 14:23:43.525149] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:30:57.577 [2024-07-15 14:23:43.525528] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:30:57.577 [2024-07-15 14:23:43.525655] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:30:57.577 [2024-07-15 14:23:43.525900] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.577 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.837 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:57.837 "name": "raid_bdev1", 00:30:57.837 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:30:57.837 "strip_size_kb": 0, 00:30:57.837 "state": "online", 00:30:57.837 "raid_level": "raid1", 00:30:57.837 "superblock": true, 00:30:57.837 "num_base_bdevs": 4, 00:30:57.837 "num_base_bdevs_discovered": 4, 00:30:57.837 
"num_base_bdevs_operational": 4, 00:30:57.837 "base_bdevs_list": [ 00:30:57.837 { 00:30:57.837 "name": "BaseBdev1", 00:30:57.837 "uuid": "bfd09628-6fe6-5a17-9288-218654c31c1c", 00:30:57.837 "is_configured": true, 00:30:57.837 "data_offset": 2048, 00:30:57.837 "data_size": 63488 00:30:57.837 }, 00:30:57.837 { 00:30:57.837 "name": "BaseBdev2", 00:30:57.837 "uuid": "89890445-a285-5779-af2c-bcb6cfe06b23", 00:30:57.837 "is_configured": true, 00:30:57.837 "data_offset": 2048, 00:30:57.837 "data_size": 63488 00:30:57.837 }, 00:30:57.837 { 00:30:57.837 "name": "BaseBdev3", 00:30:57.837 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:30:57.837 "is_configured": true, 00:30:57.837 "data_offset": 2048, 00:30:57.837 "data_size": 63488 00:30:57.837 }, 00:30:57.837 { 00:30:57.837 "name": "BaseBdev4", 00:30:57.837 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:30:57.837 "is_configured": true, 00:30:57.837 "data_offset": 2048, 00:30:57.837 "data_size": 63488 00:30:57.837 } 00:30:57.837 ] 00:30:57.837 }' 00:30:57.837 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:57.837 14:23:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:58.777 14:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:30:58.777 14:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:30:58.777 [2024-07-15 14:23:44.742846] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:58.777 14:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:30:58.777 14:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:30:58.777 14:23:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.342 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:30:59.342 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:30:59.342 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:30:59.342 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:30:59.342 [2024-07-15 14:23:45.162392] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:30:59.342 I/O size of 3145728 is greater than zero copy threshold (65536). 00:30:59.342 Zero copy mechanism will not be used. 00:30:59.342 Running I/O for 60 seconds... 
00:30:59.342 [2024-07-15 14:23:45.334216] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:59.342 [2024-07-15 14:23:45.338820] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000062f0 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:59.601 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.859 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.859 "name": "raid_bdev1", 00:30:59.859 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:30:59.859 "strip_size_kb": 0, 00:30:59.859 "state": "online", 00:30:59.859 "raid_level": "raid1", 00:30:59.859 "superblock": true, 00:30:59.859 "num_base_bdevs": 4, 00:30:59.859 "num_base_bdevs_discovered": 3, 00:30:59.859 "num_base_bdevs_operational": 3, 00:30:59.859 "base_bdevs_list": [ 00:30:59.859 { 00:30:59.859 "name": null, 00:30:59.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.859 "is_configured": false, 00:30:59.859 "data_offset": 2048, 00:30:59.859 "data_size": 63488 00:30:59.859 }, 00:30:59.859 { 00:30:59.859 "name": "BaseBdev2", 00:30:59.859 "uuid": "89890445-a285-5779-af2c-bcb6cfe06b23", 00:30:59.859 "is_configured": true, 00:30:59.859 "data_offset": 2048, 00:30:59.859 "data_size": 63488 00:30:59.859 }, 00:30:59.859 { 00:30:59.859 "name": "BaseBdev3", 00:30:59.859 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:30:59.859 "is_configured": true, 00:30:59.859 "data_offset": 2048, 00:30:59.859 "data_size": 63488 00:30:59.859 }, 00:30:59.859 { 00:30:59.859 "name": "BaseBdev4", 00:30:59.859 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:30:59.859 "is_configured": true, 00:30:59.859 "data_offset": 2048, 00:30:59.859 "data_size": 63488 00:30:59.859 } 00:30:59.859 ] 00:30:59.859 }' 00:30:59.859 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.859 14:23:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:00.427 14:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:00.991 [2024-07-15 
14:23:46.696981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:00.991 14:23:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:00.991 [2024-07-15 14:23:46.743753] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:31:00.991 [2024-07-15 14:23:46.745327] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:00.991 [2024-07-15 14:23:46.870886] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:00.991 [2024-07-15 14:23:46.872587] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:01.250 [2024-07-15 14:23:47.124378] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:01.250 [2024-07-15 14:23:47.125420] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:01.508 [2024-07-15 14:23:47.485401] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:01.766 [2024-07-15 14:23:47.606263] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:01.766 [2024-07-15 14:23:47.607300] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:01.766 14:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:01.766 14:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:01.766 14:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:01.766 14:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:01.766 14:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:01.766 14:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:01.766 14:23:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:02.024 [2024-07-15 14:23:47.961493] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:02.281 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:02.281 "name": "raid_bdev1", 00:31:02.281 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:02.281 "strip_size_kb": 0, 00:31:02.281 "state": "online", 00:31:02.281 "raid_level": "raid1", 00:31:02.281 "superblock": true, 00:31:02.281 "num_base_bdevs": 4, 00:31:02.281 "num_base_bdevs_discovered": 4, 00:31:02.281 "num_base_bdevs_operational": 4, 00:31:02.281 "process": { 00:31:02.281 "type": "rebuild", 00:31:02.281 "target": "spare", 00:31:02.281 "progress": { 00:31:02.281 "blocks": 14336, 00:31:02.281 "percent": 22 00:31:02.281 } 00:31:02.281 }, 00:31:02.281 "base_bdevs_list": [ 00:31:02.281 { 00:31:02.281 "name": "spare", 00:31:02.281 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:02.281 "is_configured": true, 00:31:02.281 "data_offset": 2048, 00:31:02.281 "data_size": 63488 00:31:02.281 }, 
00:31:02.281 { 00:31:02.281 "name": "BaseBdev2", 00:31:02.281 "uuid": "89890445-a285-5779-af2c-bcb6cfe06b23", 00:31:02.281 "is_configured": true, 00:31:02.281 "data_offset": 2048, 00:31:02.281 "data_size": 63488 00:31:02.281 }, 00:31:02.281 { 00:31:02.281 "name": "BaseBdev3", 00:31:02.281 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:02.281 "is_configured": true, 00:31:02.281 "data_offset": 2048, 00:31:02.281 "data_size": 63488 00:31:02.281 }, 00:31:02.281 { 00:31:02.281 "name": "BaseBdev4", 00:31:02.281 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:02.281 "is_configured": true, 00:31:02.281 "data_offset": 2048, 00:31:02.281 "data_size": 63488 00:31:02.281 } 00:31:02.281 ] 00:31:02.281 }' 00:31:02.281 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:02.281 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:02.281 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:02.281 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:02.281 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:02.281 [2024-07-15 14:23:48.201890] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:02.540 [2024-07-15 14:23:48.410579] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:02.798 [2024-07-15 14:23:48.544203] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:02.798 [2024-07-15 14:23:48.548472] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:02.798 [2024-07-15 14:23:48.548648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:02.798 [2024-07-15 14:23:48.548698] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:02.798 [2024-07-15 14:23:48.580404] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000062f0 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:31:02.798 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.056 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:03.056 "name": "raid_bdev1", 00:31:03.056 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:03.056 "strip_size_kb": 0, 00:31:03.056 "state": "online", 00:31:03.056 "raid_level": "raid1", 00:31:03.056 "superblock": true, 00:31:03.056 "num_base_bdevs": 4, 00:31:03.056 "num_base_bdevs_discovered": 3, 00:31:03.056 "num_base_bdevs_operational": 3, 00:31:03.056 "base_bdevs_list": [ 00:31:03.056 { 00:31:03.056 "name": null, 00:31:03.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.056 "is_configured": false, 00:31:03.056 "data_offset": 2048, 00:31:03.056 "data_size": 63488 00:31:03.056 }, 00:31:03.056 { 00:31:03.056 "name": "BaseBdev2", 00:31:03.056 "uuid": "89890445-a285-5779-af2c-bcb6cfe06b23", 00:31:03.056 "is_configured": true, 00:31:03.056 "data_offset": 2048, 00:31:03.056 "data_size": 63488 00:31:03.056 }, 00:31:03.056 { 00:31:03.056 "name": "BaseBdev3", 00:31:03.056 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:03.056 "is_configured": true, 00:31:03.056 "data_offset": 2048, 00:31:03.056 "data_size": 63488 00:31:03.056 }, 00:31:03.056 { 00:31:03.056 "name": "BaseBdev4", 00:31:03.056 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:03.056 "is_configured": true, 00:31:03.056 "data_offset": 2048, 00:31:03.056 "data_size": 63488 00:31:03.056 } 00:31:03.056 ] 00:31:03.056 }' 00:31:03.056 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:03.056 14:23:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:03.620 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:03.621 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:03.621 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:03.621 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:03.621 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:03.621 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:03.621 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:03.879 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:03.879 "name": "raid_bdev1", 00:31:03.879 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:03.879 "strip_size_kb": 0, 00:31:03.879 "state": "online", 00:31:03.879 "raid_level": "raid1", 00:31:03.879 "superblock": true, 00:31:03.879 "num_base_bdevs": 4, 00:31:03.879 "num_base_bdevs_discovered": 3, 00:31:03.879 "num_base_bdevs_operational": 3, 00:31:03.879 "base_bdevs_list": [ 00:31:03.879 { 00:31:03.879 "name": null, 00:31:03.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:03.879 "is_configured": false, 00:31:03.879 "data_offset": 2048, 00:31:03.879 "data_size": 63488 00:31:03.879 }, 00:31:03.879 { 00:31:03.879 "name": "BaseBdev2", 00:31:03.879 "uuid": "89890445-a285-5779-af2c-bcb6cfe06b23", 00:31:03.879 
"is_configured": true, 00:31:03.879 "data_offset": 2048, 00:31:03.879 "data_size": 63488 00:31:03.879 }, 00:31:03.879 { 00:31:03.879 "name": "BaseBdev3", 00:31:03.879 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:03.879 "is_configured": true, 00:31:03.879 "data_offset": 2048, 00:31:03.879 "data_size": 63488 00:31:03.879 }, 00:31:03.879 { 00:31:03.879 "name": "BaseBdev4", 00:31:03.879 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:03.879 "is_configured": true, 00:31:03.879 "data_offset": 2048, 00:31:03.879 "data_size": 63488 00:31:03.879 } 00:31:03.879 ] 00:31:03.879 }' 00:31:03.879 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:03.879 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:03.879 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:04.139 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:04.139 14:23:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:04.398 [2024-07-15 14:23:50.146748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:04.398 14:23:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:04.398 [2024-07-15 14:23:50.205906] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:31:04.398 [2024-07-15 14:23:50.207443] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:04.398 [2024-07-15 14:23:50.321816] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:04.398 [2024-07-15 14:23:50.322567] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:04.657 [2024-07-15 14:23:50.543703] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:04.657 [2024-07-15 14:23:50.544637] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:04.915 [2024-07-15 14:23:50.903432] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:05.172 [2024-07-15 14:23:51.032196] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:05.172 [2024-07-15 14:23:51.032718] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:05.430 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:05.430 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:05.430 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:05.430 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:05.430 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:05.430 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:05.430 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.430 [2024-07-15 14:23:51.351742] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:05.687 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:05.687 "name": "raid_bdev1", 00:31:05.687 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:05.687 "strip_size_kb": 0, 00:31:05.687 "state": "online", 00:31:05.687 "raid_level": "raid1", 00:31:05.687 "superblock": true, 00:31:05.687 "num_base_bdevs": 4, 00:31:05.687 "num_base_bdevs_discovered": 4, 00:31:05.687 "num_base_bdevs_operational": 4, 00:31:05.687 "process": { 00:31:05.687 "type": "rebuild", 00:31:05.688 "target": "spare", 00:31:05.688 "progress": { 00:31:05.688 "blocks": 14336, 00:31:05.688 "percent": 22 00:31:05.688 } 00:31:05.688 }, 00:31:05.688 "base_bdevs_list": [ 00:31:05.688 { 00:31:05.688 "name": "spare", 00:31:05.688 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:05.688 "is_configured": true, 00:31:05.688 "data_offset": 2048, 00:31:05.688 "data_size": 63488 00:31:05.688 }, 00:31:05.688 { 00:31:05.688 "name": "BaseBdev2", 00:31:05.688 "uuid": "89890445-a285-5779-af2c-bcb6cfe06b23", 00:31:05.688 "is_configured": true, 00:31:05.688 "data_offset": 2048, 00:31:05.688 "data_size": 63488 00:31:05.688 }, 00:31:05.688 { 00:31:05.688 "name": "BaseBdev3", 00:31:05.688 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:05.688 "is_configured": true, 00:31:05.688 "data_offset": 2048, 00:31:05.688 "data_size": 63488 00:31:05.688 }, 00:31:05.688 { 00:31:05.688 "name": "BaseBdev4", 00:31:05.688 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:05.688 "is_configured": true, 00:31:05.688 "data_offset": 2048, 00:31:05.688 "data_size": 63488 00:31:05.688 } 00:31:05.688 ] 00:31:05.688 }' 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:05.688 [2024-07-15 14:23:51.583645] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:05.688 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:31:05.688 14:23:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:31:05.946 [2024-07-15 14:23:51.884546] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:31:06.206 [2024-07-15 14:23:51.952215] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:06.206 [2024-07-15 14:23:52.158845] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000062f0 00:31:06.206 [2024-07-15 14:23:52.159088] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006560 00:31:06.206 [2024-07-15 14:23:52.159198] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.206 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.465 [2024-07-15 14:23:52.290128] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:06.724 "name": "raid_bdev1", 00:31:06.724 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:06.724 "strip_size_kb": 0, 00:31:06.724 "state": "online", 00:31:06.724 "raid_level": "raid1", 00:31:06.724 "superblock": true, 00:31:06.724 "num_base_bdevs": 4, 00:31:06.724 "num_base_bdevs_discovered": 3, 00:31:06.724 "num_base_bdevs_operational": 3, 00:31:06.724 "process": { 00:31:06.724 "type": "rebuild", 00:31:06.724 "target": "spare", 00:31:06.724 "progress": { 00:31:06.724 "blocks": 22528, 00:31:06.724 "percent": 35 00:31:06.724 } 00:31:06.724 }, 00:31:06.724 "base_bdevs_list": [ 00:31:06.724 { 00:31:06.724 "name": "spare", 00:31:06.724 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:06.724 "is_configured": true, 00:31:06.724 "data_offset": 2048, 00:31:06.724 "data_size": 63488 00:31:06.724 }, 00:31:06.724 { 00:31:06.724 "name": null, 00:31:06.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.724 "is_configured": false, 00:31:06.724 "data_offset": 2048, 00:31:06.724 "data_size": 63488 00:31:06.724 }, 00:31:06.724 { 00:31:06.724 "name": "BaseBdev3", 00:31:06.724 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:06.724 "is_configured": true, 00:31:06.724 "data_offset": 2048, 00:31:06.724 "data_size": 63488 00:31:06.724 }, 00:31:06.724 { 00:31:06.724 "name": "BaseBdev4", 00:31:06.724 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:06.724 "is_configured": true, 00:31:06.724 "data_offset": 2048, 00:31:06.724 "data_size": 63488 00:31:06.724 } 00:31:06.724 ] 00:31:06.724 }' 00:31:06.724 14:23:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=1089 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:06.724 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:06.983 [2024-07-15 14:23:52.751246] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:31:06.983 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:06.983 "name": "raid_bdev1", 00:31:06.983 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:06.983 "strip_size_kb": 0, 00:31:06.983 "state": "online", 00:31:06.983 "raid_level": "raid1", 00:31:06.983 "superblock": true, 00:31:06.983 "num_base_bdevs": 4, 00:31:06.983 "num_base_bdevs_discovered": 3, 00:31:06.983 "num_base_bdevs_operational": 3, 00:31:06.983 "process": { 00:31:06.983 "type": "rebuild", 00:31:06.983 "target": "spare", 00:31:06.983 "progress": { 00:31:06.983 "blocks": 28672, 00:31:06.983 "percent": 45 00:31:06.983 } 00:31:06.983 }, 00:31:06.983 "base_bdevs_list": [ 00:31:06.983 { 00:31:06.983 "name": "spare", 00:31:06.983 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:06.983 "is_configured": true, 00:31:06.983 "data_offset": 2048, 00:31:06.983 "data_size": 63488 00:31:06.983 }, 00:31:06.983 { 00:31:06.983 "name": null, 00:31:06.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:06.983 "is_configured": false, 00:31:06.983 "data_offset": 2048, 00:31:06.983 "data_size": 63488 00:31:06.983 }, 00:31:06.983 { 00:31:06.983 "name": "BaseBdev3", 00:31:06.983 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:06.983 "is_configured": true, 00:31:06.983 "data_offset": 2048, 00:31:06.983 "data_size": 63488 00:31:06.983 }, 00:31:06.983 { 00:31:06.983 "name": "BaseBdev4", 00:31:06.983 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:06.983 "is_configured": true, 00:31:06.983 "data_offset": 2048, 00:31:06.983 "data_size": 63488 00:31:06.983 } 00:31:06.983 ] 00:31:06.983 }' 00:31:06.983 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:06.983 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:31:07.242 14:23:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:07.242 14:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:07.242 14:23:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:07.242 [2024-07-15 14:23:53.086971] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:31:07.808 [2024-07-15 14:23:53.634636] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:31:08.066 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:08.066 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:08.066 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:08.066 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:08.067 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:08.067 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:08.067 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.067 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:08.325 [2024-07-15 14:23:54.309550] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:31:08.325 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:08.325 "name": "raid_bdev1", 00:31:08.325 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:08.325 "strip_size_kb": 0, 00:31:08.325 "state": "online", 00:31:08.325 "raid_level": "raid1", 00:31:08.325 "superblock": true, 00:31:08.325 "num_base_bdevs": 4, 00:31:08.325 "num_base_bdevs_discovered": 3, 00:31:08.325 "num_base_bdevs_operational": 3, 00:31:08.325 "process": { 00:31:08.325 "type": "rebuild", 00:31:08.325 "target": "spare", 00:31:08.325 "progress": { 00:31:08.325 "blocks": 51200, 00:31:08.325 "percent": 80 00:31:08.325 } 00:31:08.325 }, 00:31:08.325 "base_bdevs_list": [ 00:31:08.325 { 00:31:08.325 "name": "spare", 00:31:08.325 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:08.325 "is_configured": true, 00:31:08.325 "data_offset": 2048, 00:31:08.325 "data_size": 63488 00:31:08.325 }, 00:31:08.325 { 00:31:08.325 "name": null, 00:31:08.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:08.325 "is_configured": false, 00:31:08.325 "data_offset": 2048, 00:31:08.325 "data_size": 63488 00:31:08.325 }, 00:31:08.325 { 00:31:08.325 "name": "BaseBdev3", 00:31:08.325 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:08.325 "is_configured": true, 00:31:08.325 "data_offset": 2048, 00:31:08.325 "data_size": 63488 00:31:08.325 }, 00:31:08.325 { 00:31:08.325 "name": "BaseBdev4", 00:31:08.325 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:08.325 "is_configured": true, 00:31:08.325 "data_offset": 2048, 00:31:08.325 "data_size": 63488 00:31:08.325 } 00:31:08.325 ] 00:31:08.325 }' 00:31:08.584 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:08.584 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:08.584 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:08.584 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:08.584 14:23:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:08.584 [2024-07-15 14:23:54.530282] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:31:08.851 [2024-07-15 14:23:54.637257] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:31:09.120 [2024-07-15 14:23:54.857664] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:09.120 [2024-07-15 14:23:54.957681] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:09.120 [2024-07-15 14:23:54.968646] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:09.687 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:09.947 "name": "raid_bdev1", 00:31:09.947 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:09.947 "strip_size_kb": 0, 00:31:09.947 "state": "online", 00:31:09.947 "raid_level": "raid1", 00:31:09.947 "superblock": true, 00:31:09.947 "num_base_bdevs": 4, 00:31:09.947 "num_base_bdevs_discovered": 3, 00:31:09.947 "num_base_bdevs_operational": 3, 00:31:09.947 "base_bdevs_list": [ 00:31:09.947 { 00:31:09.947 "name": "spare", 00:31:09.947 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:09.947 "is_configured": true, 00:31:09.947 "data_offset": 2048, 00:31:09.947 "data_size": 63488 00:31:09.947 }, 00:31:09.947 { 00:31:09.947 "name": null, 00:31:09.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:09.947 "is_configured": false, 00:31:09.947 "data_offset": 2048, 00:31:09.947 "data_size": 63488 00:31:09.947 }, 00:31:09.947 { 00:31:09.947 "name": "BaseBdev3", 00:31:09.947 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:09.947 "is_configured": true, 00:31:09.947 "data_offset": 2048, 00:31:09.947 "data_size": 63488 00:31:09.947 }, 00:31:09.947 { 00:31:09.947 "name": "BaseBdev4", 00:31:09.947 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:09.947 "is_configured": true, 00:31:09.947 "data_offset": 
2048, 00:31:09.947 "data_size": 63488 00:31:09.947 } 00:31:09.947 ] 00:31:09.947 }' 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.947 14:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:10.206 "name": "raid_bdev1", 00:31:10.206 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:10.206 "strip_size_kb": 0, 00:31:10.206 "state": "online", 00:31:10.206 "raid_level": "raid1", 00:31:10.206 "superblock": true, 00:31:10.206 "num_base_bdevs": 4, 00:31:10.206 "num_base_bdevs_discovered": 3, 00:31:10.206 "num_base_bdevs_operational": 3, 00:31:10.206 "base_bdevs_list": [ 00:31:10.206 { 00:31:10.206 "name": "spare", 00:31:10.206 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:10.206 "is_configured": true, 00:31:10.206 "data_offset": 2048, 00:31:10.206 "data_size": 63488 00:31:10.206 }, 00:31:10.206 { 00:31:10.206 "name": null, 00:31:10.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.206 "is_configured": false, 00:31:10.206 "data_offset": 2048, 00:31:10.206 "data_size": 63488 00:31:10.206 }, 00:31:10.206 { 00:31:10.206 "name": "BaseBdev3", 00:31:10.206 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:10.206 "is_configured": true, 00:31:10.206 "data_offset": 2048, 00:31:10.206 "data_size": 63488 00:31:10.206 }, 00:31:10.206 { 00:31:10.206 "name": "BaseBdev4", 00:31:10.206 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:10.206 "is_configured": true, 00:31:10.206 "data_offset": 2048, 00:31:10.206 "data_size": 63488 00:31:10.206 } 00:31:10.206 ] 00:31:10.206 }' 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:10.206 
14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.206 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.771 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:10.771 "name": "raid_bdev1", 00:31:10.771 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:10.771 "strip_size_kb": 0, 00:31:10.771 "state": "online", 00:31:10.771 "raid_level": "raid1", 00:31:10.771 "superblock": true, 00:31:10.771 "num_base_bdevs": 4, 00:31:10.771 "num_base_bdevs_discovered": 3, 00:31:10.771 "num_base_bdevs_operational": 3, 00:31:10.771 "base_bdevs_list": [ 00:31:10.771 { 00:31:10.771 "name": "spare", 00:31:10.771 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:10.771 "is_configured": true, 00:31:10.771 "data_offset": 2048, 00:31:10.771 "data_size": 63488 00:31:10.771 }, 00:31:10.771 { 00:31:10.771 "name": null, 00:31:10.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:10.771 "is_configured": false, 00:31:10.771 "data_offset": 2048, 00:31:10.771 "data_size": 63488 00:31:10.771 }, 00:31:10.771 { 00:31:10.771 "name": "BaseBdev3", 00:31:10.771 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:10.771 "is_configured": true, 00:31:10.771 "data_offset": 2048, 00:31:10.771 "data_size": 63488 00:31:10.771 }, 00:31:10.771 { 00:31:10.771 "name": "BaseBdev4", 00:31:10.771 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:10.771 "is_configured": true, 00:31:10.771 "data_offset": 2048, 00:31:10.771 "data_size": 63488 00:31:10.771 } 00:31:10.771 ] 00:31:10.771 }' 00:31:10.771 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:10.771 14:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:11.335 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:11.593 [2024-07-15 14:23:57.402214] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:11.593 [2024-07-15 14:23:57.402414] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:11.593 00:31:11.593 Latency(us) 00:31:11.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.593 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 2, IO size: 3145728) 00:31:11.593 raid_bdev1 : 12.27 108.93 326.79 0.00 0.00 13025.09 310.92 115819.99 00:31:11.593 =================================================================================================================== 00:31:11.593 Total : 108.93 326.79 0.00 0.00 13025.09 310.92 115819.99 00:31:11.593 [2024-07-15 14:23:57.455678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:11.593 [2024-07-15 14:23:57.455862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:11.593 [2024-07-15 14:23:57.455975] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:11.593 0 00:31:11.593 [2024-07-15 14:23:57.456182] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:31:11.593 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.593 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:11.851 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:31:12.108 /dev/nbd0 00:31:12.108 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:12.108 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:12.108 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:12.109 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:12.109 14:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:12.109 1+0 records in 00:31:12.109 1+0 records out 00:31:12.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039913 s, 10.3 MB/s 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:12.109 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:31:12.371 /dev/nbd1 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:12.372 1+0 records in 00:31:12.372 1+0 records out 00:31:12.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524697 s, 7.8 MB/s 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:12.372 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:12.629 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:12.629 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:12.629 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:12.629 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:12.629 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:12.629 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:12.629 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:12.887 14:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:31:13.145 /dev/nbd1 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # local i 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # break 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:13.145 1+0 records in 00:31:13.145 1+0 records out 00:31:13.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284251 s, 14.4 MB/s 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # size=4096 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # return 0 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:13.145 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:13.403 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:31:13.403 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:13.403 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:31:13.403 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:13.403 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:13.403 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:13.403 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:13.660 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:13.661 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:13.661 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:13.661 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:13.661 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:31:13.661 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:13.661 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:31:13.918 14:23:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:31:13.918 14:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:14.176 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:14.433 [2024-07-15 14:24:00.326917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:14.433 [2024-07-15 14:24:00.327017] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:14.433 [2024-07-15 14:24:00.327068] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:31:14.433 [2024-07-15 14:24:00.327098] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:14.433 [2024-07-15 14:24:00.329019] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:14.433 [2024-07-15 14:24:00.329101] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:14.433 [2024-07-15 14:24:00.329244] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:14.433 [2024-07-15 14:24:00.329345] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:14.433 [2024-07-15 14:24:00.329483] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:14.433 [2024-07-15 14:24:00.329583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:14.433 spare 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:14.433 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:14.433 [2024-07-15 14:24:00.429676] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:31:14.433 [2024-07-15 14:24:00.429737] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:14.433 [2024-07-15 14:24:00.429881] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:31:14.433 
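Note (not part of the test output above or below): the verify_raid_bdev_state call traced above reduces to one RPC plus a jq filter over its JSON output. A minimal standalone sketch of the same check follows; the rpc.py path, socket, RPC method and JSON field names are taken verbatim from the trace, while the variable names and the exact assertion are illustrative only.

  # Sketch only: equivalent of the state check verify_raid_bdev_state performs above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  state=$(jq -r '.state' <<< "$info")
  level=$(jq -r '.raid_level' <<< "$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  # Expect the array online as raid1 with 3 of 4 base bdevs present (spare, BaseBdev3, BaseBdev4).
  [[ "$state" == "online" && "$level" == "raid1" && "$discovered" == "3" ]] \
      || echo "unexpected raid_bdev1 state: $state/$level/$discovered"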
[2024-07-15 14:24:00.430194] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:31:14.433 [2024-07-15 14:24:00.430209] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:31:14.433 [2024-07-15 14:24:00.430331] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:14.691 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:14.691 "name": "raid_bdev1", 00:31:14.691 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:14.691 "strip_size_kb": 0, 00:31:14.691 "state": "online", 00:31:14.692 "raid_level": "raid1", 00:31:14.692 "superblock": true, 00:31:14.692 "num_base_bdevs": 4, 00:31:14.692 "num_base_bdevs_discovered": 3, 00:31:14.692 "num_base_bdevs_operational": 3, 00:31:14.692 "base_bdevs_list": [ 00:31:14.692 { 00:31:14.692 "name": "spare", 00:31:14.692 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:14.692 "is_configured": true, 00:31:14.692 "data_offset": 2048, 00:31:14.692 "data_size": 63488 00:31:14.692 }, 00:31:14.692 { 00:31:14.692 "name": null, 00:31:14.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:14.692 "is_configured": false, 00:31:14.692 "data_offset": 2048, 00:31:14.692 "data_size": 63488 00:31:14.692 }, 00:31:14.692 { 00:31:14.692 "name": "BaseBdev3", 00:31:14.692 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:14.692 "is_configured": true, 00:31:14.692 "data_offset": 2048, 00:31:14.692 "data_size": 63488 00:31:14.692 }, 00:31:14.692 { 00:31:14.692 "name": "BaseBdev4", 00:31:14.692 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:14.692 "is_configured": true, 00:31:14.692 "data_offset": 2048, 00:31:14.692 "data_size": 63488 00:31:14.692 } 00:31:14.692 ] 00:31:14.692 }' 00:31:14.692 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:14.692 14:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:15.628 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:15.628 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:15.628 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:15.628 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:15.628 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:15.628 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.628 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.886 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:15.886 "name": "raid_bdev1", 00:31:15.886 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:15.886 "strip_size_kb": 0, 00:31:15.886 "state": "online", 00:31:15.886 "raid_level": "raid1", 00:31:15.886 "superblock": true, 00:31:15.886 "num_base_bdevs": 4, 00:31:15.886 "num_base_bdevs_discovered": 3, 00:31:15.886 "num_base_bdevs_operational": 3, 00:31:15.886 "base_bdevs_list": [ 00:31:15.886 { 00:31:15.886 "name": "spare", 00:31:15.886 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:15.886 "is_configured": true, 00:31:15.886 
"data_offset": 2048, 00:31:15.886 "data_size": 63488 00:31:15.886 }, 00:31:15.886 { 00:31:15.886 "name": null, 00:31:15.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:15.886 "is_configured": false, 00:31:15.886 "data_offset": 2048, 00:31:15.886 "data_size": 63488 00:31:15.886 }, 00:31:15.886 { 00:31:15.886 "name": "BaseBdev3", 00:31:15.886 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:15.886 "is_configured": true, 00:31:15.886 "data_offset": 2048, 00:31:15.886 "data_size": 63488 00:31:15.886 }, 00:31:15.886 { 00:31:15.886 "name": "BaseBdev4", 00:31:15.886 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:15.886 "is_configured": true, 00:31:15.886 "data_offset": 2048, 00:31:15.886 "data_size": 63488 00:31:15.886 } 00:31:15.886 ] 00:31:15.886 }' 00:31:15.886 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:15.886 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:15.886 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:15.886 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:15.886 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:15.886 14:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:16.145 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:31:16.145 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:16.404 [2024-07-15 14:24:02.291366] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.404 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.661 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.661 "name": "raid_bdev1", 00:31:16.661 "uuid": 
"7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:16.661 "strip_size_kb": 0, 00:31:16.661 "state": "online", 00:31:16.661 "raid_level": "raid1", 00:31:16.661 "superblock": true, 00:31:16.661 "num_base_bdevs": 4, 00:31:16.661 "num_base_bdevs_discovered": 2, 00:31:16.661 "num_base_bdevs_operational": 2, 00:31:16.661 "base_bdevs_list": [ 00:31:16.661 { 00:31:16.661 "name": null, 00:31:16.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.661 "is_configured": false, 00:31:16.661 "data_offset": 2048, 00:31:16.661 "data_size": 63488 00:31:16.661 }, 00:31:16.661 { 00:31:16.661 "name": null, 00:31:16.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.661 "is_configured": false, 00:31:16.661 "data_offset": 2048, 00:31:16.662 "data_size": 63488 00:31:16.662 }, 00:31:16.662 { 00:31:16.662 "name": "BaseBdev3", 00:31:16.662 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:16.662 "is_configured": true, 00:31:16.662 "data_offset": 2048, 00:31:16.662 "data_size": 63488 00:31:16.662 }, 00:31:16.662 { 00:31:16.662 "name": "BaseBdev4", 00:31:16.662 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:16.662 "is_configured": true, 00:31:16.662 "data_offset": 2048, 00:31:16.662 "data_size": 63488 00:31:16.662 } 00:31:16.662 ] 00:31:16.662 }' 00:31:16.662 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.662 14:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:17.595 14:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:17.595 [2024-07-15 14:24:03.527640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:17.595 [2024-07-15 14:24:03.527868] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:17.595 [2024-07-15 14:24:03.527885] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:17.595 [2024-07-15 14:24:03.527937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:17.595 [2024-07-15 14:24:03.540049] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037710 00:31:17.595 [2024-07-15 14:24:03.541639] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:17.595 14:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:18.970 "name": "raid_bdev1", 00:31:18.970 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:18.970 "strip_size_kb": 0, 00:31:18.970 "state": "online", 00:31:18.970 "raid_level": "raid1", 00:31:18.970 "superblock": true, 00:31:18.970 "num_base_bdevs": 4, 00:31:18.970 "num_base_bdevs_discovered": 3, 00:31:18.970 "num_base_bdevs_operational": 3, 00:31:18.970 "process": { 00:31:18.970 "type": "rebuild", 00:31:18.970 "target": "spare", 00:31:18.970 "progress": { 00:31:18.970 "blocks": 24576, 00:31:18.970 "percent": 38 00:31:18.970 } 00:31:18.970 }, 00:31:18.970 "base_bdevs_list": [ 00:31:18.970 { 00:31:18.970 "name": "spare", 00:31:18.970 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:18.970 "is_configured": true, 00:31:18.970 "data_offset": 2048, 00:31:18.970 "data_size": 63488 00:31:18.970 }, 00:31:18.970 { 00:31:18.970 "name": null, 00:31:18.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.970 "is_configured": false, 00:31:18.970 "data_offset": 2048, 00:31:18.970 "data_size": 63488 00:31:18.970 }, 00:31:18.970 { 00:31:18.970 "name": "BaseBdev3", 00:31:18.970 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:18.970 "is_configured": true, 00:31:18.970 "data_offset": 2048, 00:31:18.970 "data_size": 63488 00:31:18.970 }, 00:31:18.970 { 00:31:18.970 "name": "BaseBdev4", 00:31:18.970 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:18.970 "is_configured": true, 00:31:18.970 "data_offset": 2048, 00:31:18.970 "data_size": 63488 00:31:18.970 } 00:31:18.970 ] 00:31:18.970 }' 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:18.970 14:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:19.228 [2024-07-15 14:24:05.196169] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:19.487 [2024-07-15 14:24:05.251345] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:19.487 [2024-07-15 14:24:05.251412] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.487 [2024-07-15 14:24:05.251430] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:19.487 [2024-07-15 14:24:05.251439] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.487 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:19.745 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:19.745 "name": "raid_bdev1", 00:31:19.745 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:19.745 "strip_size_kb": 0, 00:31:19.745 "state": "online", 00:31:19.745 "raid_level": "raid1", 00:31:19.745 "superblock": true, 00:31:19.745 "num_base_bdevs": 4, 00:31:19.745 "num_base_bdevs_discovered": 2, 00:31:19.745 "num_base_bdevs_operational": 2, 00:31:19.745 "base_bdevs_list": [ 00:31:19.745 { 00:31:19.745 "name": null, 00:31:19.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.745 "is_configured": false, 00:31:19.745 "data_offset": 2048, 00:31:19.745 "data_size": 63488 00:31:19.745 }, 00:31:19.745 { 00:31:19.745 "name": null, 00:31:19.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:19.745 "is_configured": false, 00:31:19.745 "data_offset": 2048, 00:31:19.745 "data_size": 63488 00:31:19.745 }, 00:31:19.745 { 00:31:19.745 "name": "BaseBdev3", 00:31:19.745 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:19.745 "is_configured": true, 00:31:19.745 "data_offset": 2048, 00:31:19.745 "data_size": 63488 00:31:19.745 }, 00:31:19.745 { 00:31:19.745 "name": "BaseBdev4", 00:31:19.745 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:19.746 "is_configured": true, 00:31:19.746 "data_offset": 2048, 00:31:19.746 "data_size": 63488 
00:31:19.746 } 00:31:19.746 ] 00:31:19.746 }' 00:31:19.746 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:19.746 14:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:20.312 14:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:20.570 [2024-07-15 14:24:06.469744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:20.570 [2024-07-15 14:24:06.469816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:20.570 [2024-07-15 14:24:06.469858] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:31:20.570 [2024-07-15 14:24:06.469881] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:20.570 [2024-07-15 14:24:06.470275] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:20.570 [2024-07-15 14:24:06.470306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:20.570 [2024-07-15 14:24:06.470402] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:20.570 [2024-07-15 14:24:06.470417] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:31:20.570 [2024-07-15 14:24:06.470426] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:20.570 [2024-07-15 14:24:06.470461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:20.570 [2024-07-15 14:24:06.483002] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037a50 00:31:20.570 spare 00:31:20.570 [2024-07-15 14:24:06.484425] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:20.570 14:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:31:21.505 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:21.505 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:21.505 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:21.505 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:21.505 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:21.764 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.764 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.024 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:22.024 "name": "raid_bdev1", 00:31:22.024 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:22.024 "strip_size_kb": 0, 00:31:22.024 "state": "online", 00:31:22.024 "raid_level": "raid1", 00:31:22.024 "superblock": true, 00:31:22.024 "num_base_bdevs": 4, 00:31:22.024 "num_base_bdevs_discovered": 3, 00:31:22.024 "num_base_bdevs_operational": 3, 00:31:22.024 "process": { 00:31:22.024 "type": "rebuild", 00:31:22.024 "target": 
"spare", 00:31:22.024 "progress": { 00:31:22.024 "blocks": 24576, 00:31:22.024 "percent": 38 00:31:22.024 } 00:31:22.024 }, 00:31:22.024 "base_bdevs_list": [ 00:31:22.024 { 00:31:22.024 "name": "spare", 00:31:22.024 "uuid": "a016284c-506e-59fc-aeea-13dd36d2c9de", 00:31:22.024 "is_configured": true, 00:31:22.024 "data_offset": 2048, 00:31:22.024 "data_size": 63488 00:31:22.024 }, 00:31:22.024 { 00:31:22.024 "name": null, 00:31:22.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.024 "is_configured": false, 00:31:22.024 "data_offset": 2048, 00:31:22.024 "data_size": 63488 00:31:22.024 }, 00:31:22.024 { 00:31:22.024 "name": "BaseBdev3", 00:31:22.024 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:22.024 "is_configured": true, 00:31:22.024 "data_offset": 2048, 00:31:22.024 "data_size": 63488 00:31:22.024 }, 00:31:22.024 { 00:31:22.024 "name": "BaseBdev4", 00:31:22.024 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:22.024 "is_configured": true, 00:31:22.024 "data_offset": 2048, 00:31:22.024 "data_size": 63488 00:31:22.024 } 00:31:22.024 ] 00:31:22.024 }' 00:31:22.025 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:22.025 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:22.025 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:22.025 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:22.025 14:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:31:22.283 [2024-07-15 14:24:08.139105] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.283 [2024-07-15 14:24:08.194226] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:22.283 [2024-07-15 14:24:08.194433] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.283 [2024-07-15 14:24:08.194491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:22.283 [2024-07-15 14:24:08.194591] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:22.283 14:24:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.283 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.541 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:22.541 "name": "raid_bdev1", 00:31:22.541 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:22.541 "strip_size_kb": 0, 00:31:22.541 "state": "online", 00:31:22.541 "raid_level": "raid1", 00:31:22.541 "superblock": true, 00:31:22.541 "num_base_bdevs": 4, 00:31:22.541 "num_base_bdevs_discovered": 2, 00:31:22.541 "num_base_bdevs_operational": 2, 00:31:22.541 "base_bdevs_list": [ 00:31:22.541 { 00:31:22.541 "name": null, 00:31:22.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.541 "is_configured": false, 00:31:22.541 "data_offset": 2048, 00:31:22.541 "data_size": 63488 00:31:22.541 }, 00:31:22.541 { 00:31:22.541 "name": null, 00:31:22.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.541 "is_configured": false, 00:31:22.541 "data_offset": 2048, 00:31:22.541 "data_size": 63488 00:31:22.541 }, 00:31:22.541 { 00:31:22.541 "name": "BaseBdev3", 00:31:22.541 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:22.541 "is_configured": true, 00:31:22.541 "data_offset": 2048, 00:31:22.541 "data_size": 63488 00:31:22.541 }, 00:31:22.541 { 00:31:22.541 "name": "BaseBdev4", 00:31:22.541 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:22.541 "is_configured": true, 00:31:22.541 "data_offset": 2048, 00:31:22.541 "data_size": 63488 00:31:22.541 } 00:31:22.541 ] 00:31:22.541 }' 00:31:22.541 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:22.541 14:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:23.474 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:23.474 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:23.474 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:23.474 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:23.474 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:23.474 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.475 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.475 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:23.475 "name": "raid_bdev1", 00:31:23.475 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:23.475 "strip_size_kb": 0, 00:31:23.475 "state": "online", 00:31:23.475 "raid_level": "raid1", 00:31:23.475 "superblock": true, 00:31:23.475 "num_base_bdevs": 4, 00:31:23.475 "num_base_bdevs_discovered": 2, 00:31:23.475 "num_base_bdevs_operational": 2, 00:31:23.475 "base_bdevs_list": [ 00:31:23.475 { 00:31:23.475 "name": null, 00:31:23.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.475 "is_configured": false, 00:31:23.475 "data_offset": 2048, 00:31:23.475 "data_size": 63488 00:31:23.475 }, 00:31:23.475 { 00:31:23.475 "name": null, 
00:31:23.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.475 "is_configured": false, 00:31:23.475 "data_offset": 2048, 00:31:23.475 "data_size": 63488 00:31:23.475 }, 00:31:23.475 { 00:31:23.475 "name": "BaseBdev3", 00:31:23.475 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:23.475 "is_configured": true, 00:31:23.475 "data_offset": 2048, 00:31:23.475 "data_size": 63488 00:31:23.475 }, 00:31:23.475 { 00:31:23.475 "name": "BaseBdev4", 00:31:23.475 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:23.475 "is_configured": true, 00:31:23.475 "data_offset": 2048, 00:31:23.475 "data_size": 63488 00:31:23.475 } 00:31:23.475 ] 00:31:23.475 }' 00:31:23.475 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:23.475 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:23.475 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:23.733 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:23.733 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:31:23.991 14:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:24.249 [2024-07-15 14:24:10.044979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:24.249 [2024-07-15 14:24:10.045244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:24.249 [2024-07-15 14:24:10.045327] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:31:24.249 [2024-07-15 14:24:10.045607] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:24.249 [2024-07-15 14:24:10.046007] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:24.249 [2024-07-15 14:24:10.046157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:24.249 [2024-07-15 14:24:10.046372] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:24.249 [2024-07-15 14:24:10.046491] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:24.249 [2024-07-15 14:24:10.046593] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:24.249 BaseBdev1 00:31:24.249 14:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:31:25.188 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:25.188 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:25.189 
14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:25.189 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.448 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:25.448 "name": "raid_bdev1", 00:31:25.448 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:25.448 "strip_size_kb": 0, 00:31:25.448 "state": "online", 00:31:25.448 "raid_level": "raid1", 00:31:25.448 "superblock": true, 00:31:25.448 "num_base_bdevs": 4, 00:31:25.448 "num_base_bdevs_discovered": 2, 00:31:25.448 "num_base_bdevs_operational": 2, 00:31:25.448 "base_bdevs_list": [ 00:31:25.448 { 00:31:25.448 "name": null, 00:31:25.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.448 "is_configured": false, 00:31:25.448 "data_offset": 2048, 00:31:25.448 "data_size": 63488 00:31:25.448 }, 00:31:25.448 { 00:31:25.448 "name": null, 00:31:25.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:25.448 "is_configured": false, 00:31:25.448 "data_offset": 2048, 00:31:25.448 "data_size": 63488 00:31:25.448 }, 00:31:25.448 { 00:31:25.448 "name": "BaseBdev3", 00:31:25.448 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:25.448 "is_configured": true, 00:31:25.448 "data_offset": 2048, 00:31:25.448 "data_size": 63488 00:31:25.448 }, 00:31:25.448 { 00:31:25.448 "name": "BaseBdev4", 00:31:25.448 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:25.448 "is_configured": true, 00:31:25.448 "data_offset": 2048, 00:31:25.448 "data_size": 63488 00:31:25.448 } 00:31:25.448 ] 00:31:25.448 }' 00:31:25.448 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:25.448 14:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:26.385 "name": "raid_bdev1", 00:31:26.385 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:26.385 "strip_size_kb": 0, 00:31:26.385 "state": "online", 00:31:26.385 "raid_level": "raid1", 00:31:26.385 
"superblock": true, 00:31:26.385 "num_base_bdevs": 4, 00:31:26.385 "num_base_bdevs_discovered": 2, 00:31:26.385 "num_base_bdevs_operational": 2, 00:31:26.385 "base_bdevs_list": [ 00:31:26.385 { 00:31:26.385 "name": null, 00:31:26.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.385 "is_configured": false, 00:31:26.385 "data_offset": 2048, 00:31:26.385 "data_size": 63488 00:31:26.385 }, 00:31:26.385 { 00:31:26.385 "name": null, 00:31:26.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:26.385 "is_configured": false, 00:31:26.385 "data_offset": 2048, 00:31:26.385 "data_size": 63488 00:31:26.385 }, 00:31:26.385 { 00:31:26.385 "name": "BaseBdev3", 00:31:26.385 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:26.385 "is_configured": true, 00:31:26.385 "data_offset": 2048, 00:31:26.385 "data_size": 63488 00:31:26.385 }, 00:31:26.385 { 00:31:26.385 "name": "BaseBdev4", 00:31:26.385 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:26.385 "is_configured": true, 00:31:26.385 "data_offset": 2048, 00:31:26.385 "data_size": 63488 00:31:26.385 } 00:31:26.385 ] 00:31:26.385 }' 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:26.385 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:26.644 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:26.903 [2024-07-15 14:24:12.677525] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:26.903 
[2024-07-15 14:24:12.677813] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:31:26.903 [2024-07-15 14:24:12.677950] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:26.903 request: 00:31:26.903 { 00:31:26.903 "base_bdev": "BaseBdev1", 00:31:26.903 "raid_bdev": "raid_bdev1", 00:31:26.903 "method": "bdev_raid_add_base_bdev", 00:31:26.903 "req_id": 1 00:31:26.903 } 00:31:26.903 Got JSON-RPC error response 00:31:26.903 response: 00:31:26.903 { 00:31:26.903 "code": -22, 00:31:26.903 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:26.903 } 00:31:26.903 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:31:26.903 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:26.903 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:26.903 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:26.903 14:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:31:27.856 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:27.856 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:27.856 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:27.856 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:27.856 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:27.856 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:27.857 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:27.857 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:27.857 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:27.857 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:27.857 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:27.857 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.115 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.115 "name": "raid_bdev1", 00:31:28.115 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:28.115 "strip_size_kb": 0, 00:31:28.115 "state": "online", 00:31:28.115 "raid_level": "raid1", 00:31:28.115 "superblock": true, 00:31:28.115 "num_base_bdevs": 4, 00:31:28.115 "num_base_bdevs_discovered": 2, 00:31:28.115 "num_base_bdevs_operational": 2, 00:31:28.115 "base_bdevs_list": [ 00:31:28.115 { 00:31:28.115 "name": null, 00:31:28.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.115 "is_configured": false, 00:31:28.115 "data_offset": 2048, 00:31:28.115 "data_size": 63488 00:31:28.115 }, 00:31:28.115 { 00:31:28.115 "name": null, 00:31:28.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.115 "is_configured": false, 00:31:28.115 
"data_offset": 2048, 00:31:28.115 "data_size": 63488 00:31:28.115 }, 00:31:28.115 { 00:31:28.115 "name": "BaseBdev3", 00:31:28.115 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:28.115 "is_configured": true, 00:31:28.115 "data_offset": 2048, 00:31:28.115 "data_size": 63488 00:31:28.115 }, 00:31:28.115 { 00:31:28.115 "name": "BaseBdev4", 00:31:28.115 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:28.115 "is_configured": true, 00:31:28.115 "data_offset": 2048, 00:31:28.115 "data_size": 63488 00:31:28.115 } 00:31:28.115 ] 00:31:28.115 }' 00:31:28.115 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.115 14:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:28.683 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:28.683 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:28.683 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:28.683 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:28.683 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:28.683 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.683 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:28.943 "name": "raid_bdev1", 00:31:28.943 "uuid": "7aa81bcf-94ab-4ce4-8e4f-d7e601751fd5", 00:31:28.943 "strip_size_kb": 0, 00:31:28.943 "state": "online", 00:31:28.943 "raid_level": "raid1", 00:31:28.943 "superblock": true, 00:31:28.943 "num_base_bdevs": 4, 00:31:28.943 "num_base_bdevs_discovered": 2, 00:31:28.943 "num_base_bdevs_operational": 2, 00:31:28.943 "base_bdevs_list": [ 00:31:28.943 { 00:31:28.943 "name": null, 00:31:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.943 "is_configured": false, 00:31:28.943 "data_offset": 2048, 00:31:28.943 "data_size": 63488 00:31:28.943 }, 00:31:28.943 { 00:31:28.943 "name": null, 00:31:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:28.943 "is_configured": false, 00:31:28.943 "data_offset": 2048, 00:31:28.943 "data_size": 63488 00:31:28.943 }, 00:31:28.943 { 00:31:28.943 "name": "BaseBdev3", 00:31:28.943 "uuid": "3b54f718-b11d-581e-914b-3870d2d688f3", 00:31:28.943 "is_configured": true, 00:31:28.943 "data_offset": 2048, 00:31:28.943 "data_size": 63488 00:31:28.943 }, 00:31:28.943 { 00:31:28.943 "name": "BaseBdev4", 00:31:28.943 "uuid": "4bef2656-246f-5ba1-8161-8f9a580c8227", 00:31:28.943 "is_configured": true, 00:31:28.943 "data_offset": 2048, 00:31:28.943 "data_size": 63488 00:31:28.943 } 00:31:28.943 ] 00:31:28.943 }' 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:28.943 14:24:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 215208 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@948 -- # '[' -z 215208 ']' 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # kill -0 215208 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # uname 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 215208 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@966 -- # echo 'killing process with pid 215208' 00:31:28.943 killing process with pid 215208 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@967 -- # kill 215208 00:31:28.943 Received shutdown signal, test time was about 29.704331 seconds 00:31:28.943 00:31:28.943 Latency(us) 00:31:28.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:28.943 =================================================================================================================== 00:31:28.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:28.943 [2024-07-15 14:24:14.868793] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:28.943 [2024-07-15 14:24:14.868993] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:28.943 [2024-07-15 14:24:14.869152] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:28.943 [2024-07-15 14:24:14.869265] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:31:28.943 14:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # wait 215208 00:31:29.512 [2024-07-15 14:24:15.232116] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:30.448 14:24:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:31:30.448 00:31:30.448 real 0m37.073s 00:31:30.448 user 0m59.799s 00:31:30.448 sys 0m3.970s 00:31:30.448 14:24:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:30.448 14:24:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:31:30.448 ************************************ 00:31:30.448 END TEST raid_rebuild_test_sb_io 00:31:30.448 ************************************ 00:31:30.706 14:24:16 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:30.706 14:24:16 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:31:30.706 14:24:16 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:31:30.706 14:24:16 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:31:30.706 14:24:16 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:30.706 14:24:16 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.706 14:24:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:30.706 ************************************ 00:31:30.706 START TEST 
raid_state_function_test_sb_4k 00:31:30.706 ************************************ 00:31:30.706 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:31:30.706 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=216136 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 216136' 00:31:30.707 Process raid pid: 216136 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 216136 /var/tmp/spdk-raid.sock 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 216136 ']' 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 
-- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:30.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:30.707 14:24:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:30.707 [2024-07-15 14:24:16.548012] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:31:30.707 [2024-07-15 14:24:16.548380] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.965 [2024-07-15 14:24:16.712506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:30.965 [2024-07-15 14:24:16.930497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.224 [2024-07-15 14:24:17.129358] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:31.792 [2024-07-15 14:24:17.752864] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:31.792 [2024-07-15 14:24:17.753135] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:31.792 [2024-07-15 14:24:17.753306] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:31.792 [2024-07-15 14:24:17.753499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:31.792 14:24:17 
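At this point the state-function test deliberately creates the raid before either base bdev exists: bdev_open_ext reports that BaseBdev1 and BaseBdev2 cannot be found, so Existed_Raid is registered but stays in the "configuring" state with num_base_bdevs_discovered 0, which is exactly what the verify_raid_bdev_state call that follows asserts. A condensed sketch of that sequence, using the same socket and names as the log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # neither base bdev exists yet, so the raid cannot go online
  $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'   # expect: configuring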
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.792 14:24:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:32.050 14:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:32.050 "name": "Existed_Raid", 00:31:32.050 "uuid": "8982dcc7-be8f-4a49-acf6-ab555cccfbff", 00:31:32.050 "strip_size_kb": 0, 00:31:32.050 "state": "configuring", 00:31:32.050 "raid_level": "raid1", 00:31:32.050 "superblock": true, 00:31:32.050 "num_base_bdevs": 2, 00:31:32.050 "num_base_bdevs_discovered": 0, 00:31:32.050 "num_base_bdevs_operational": 2, 00:31:32.050 "base_bdevs_list": [ 00:31:32.050 { 00:31:32.050 "name": "BaseBdev1", 00:31:32.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.051 "is_configured": false, 00:31:32.051 "data_offset": 0, 00:31:32.051 "data_size": 0 00:31:32.051 }, 00:31:32.051 { 00:31:32.051 "name": "BaseBdev2", 00:31:32.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:32.051 "is_configured": false, 00:31:32.051 "data_offset": 0, 00:31:32.051 "data_size": 0 00:31:32.051 } 00:31:32.051 ] 00:31:32.051 }' 00:31:32.051 14:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:32.051 14:24:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:32.987 14:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:32.987 [2024-07-15 14:24:18.957121] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:32.987 [2024-07-15 14:24:18.957355] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:31:32.987 14:24:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:33.246 [2024-07-15 14:24:19.225168] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:31:33.246 [2024-07-15 14:24:19.225418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:31:33.246 [2024-07-15 14:24:19.225598] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:33.246 [2024-07-15 14:24:19.225815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:33.246 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:31:33.811 [2024-07-15 14:24:19.532041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:33.811 BaseBdev1 00:31:33.811 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:31:33.811 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:31:33.811 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 
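The "_4k" test variants build their base devices with a 4096-byte block size instead of the default 512, which is where the block_size 4096 / num_blocks 8192 values in the bdev descriptors below come from (32 MiB of backing RAM divided into 4 KiB blocks). A one-line sketch of that step, assuming the same rpc.py wrapper as above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_malloc_create 32 4096 -b BaseBdev1    # 32 MiB, 4096 B blocks -> 8192 blocks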
00:31:33.811 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:31:33.811 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:33.811 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:33.811 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:34.070 14:24:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:31:34.070 [ 00:31:34.070 { 00:31:34.070 "name": "BaseBdev1", 00:31:34.070 "aliases": [ 00:31:34.070 "26baa16a-07dc-4043-89bd-dcaf8c10ea38" 00:31:34.070 ], 00:31:34.070 "product_name": "Malloc disk", 00:31:34.070 "block_size": 4096, 00:31:34.070 "num_blocks": 8192, 00:31:34.070 "uuid": "26baa16a-07dc-4043-89bd-dcaf8c10ea38", 00:31:34.070 "assigned_rate_limits": { 00:31:34.070 "rw_ios_per_sec": 0, 00:31:34.070 "rw_mbytes_per_sec": 0, 00:31:34.070 "r_mbytes_per_sec": 0, 00:31:34.070 "w_mbytes_per_sec": 0 00:31:34.070 }, 00:31:34.070 "claimed": true, 00:31:34.070 "claim_type": "exclusive_write", 00:31:34.070 "zoned": false, 00:31:34.070 "supported_io_types": { 00:31:34.070 "read": true, 00:31:34.070 "write": true, 00:31:34.070 "unmap": true, 00:31:34.070 "flush": true, 00:31:34.070 "reset": true, 00:31:34.070 "nvme_admin": false, 00:31:34.070 "nvme_io": false, 00:31:34.070 "nvme_io_md": false, 00:31:34.070 "write_zeroes": true, 00:31:34.070 "zcopy": true, 00:31:34.070 "get_zone_info": false, 00:31:34.070 "zone_management": false, 00:31:34.070 "zone_append": false, 00:31:34.070 "compare": false, 00:31:34.070 "compare_and_write": false, 00:31:34.070 "abort": true, 00:31:34.070 "seek_hole": false, 00:31:34.070 "seek_data": false, 00:31:34.070 "copy": true, 00:31:34.070 "nvme_iov_md": false 00:31:34.070 }, 00:31:34.070 "memory_domains": [ 00:31:34.070 { 00:31:34.070 "dma_device_id": "system", 00:31:34.070 "dma_device_type": 1 00:31:34.070 }, 00:31:34.070 { 00:31:34.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:34.070 "dma_device_type": 2 00:31:34.070 } 00:31:34.070 ], 00:31:34.070 "driver_specific": {} 00:31:34.070 } 00:31:34.070 ] 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:34.328 14:24:20 
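The waitforbdev helper seen here first flushes pending examine callbacks and then asks for the bdev descriptor with a 2000 ms timeout, returning once bdev_get_bdevs can produce the JSON dump printed below. A minimal equivalent of those two RPC calls (not the helper's actual body) would be:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_wait_for_examine                    # let examine callbacks finish first
  $RPC bdev_get_bdevs -b BaseBdev1 -t 2000      # -t: wait up to 2000 ms for the bdev to appear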
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.328 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:34.587 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:34.587 "name": "Existed_Raid", 00:31:34.587 "uuid": "e60530c3-87ad-40e9-8974-1a92877bb00d", 00:31:34.587 "strip_size_kb": 0, 00:31:34.587 "state": "configuring", 00:31:34.587 "raid_level": "raid1", 00:31:34.587 "superblock": true, 00:31:34.587 "num_base_bdevs": 2, 00:31:34.587 "num_base_bdevs_discovered": 1, 00:31:34.587 "num_base_bdevs_operational": 2, 00:31:34.587 "base_bdevs_list": [ 00:31:34.587 { 00:31:34.587 "name": "BaseBdev1", 00:31:34.587 "uuid": "26baa16a-07dc-4043-89bd-dcaf8c10ea38", 00:31:34.587 "is_configured": true, 00:31:34.587 "data_offset": 256, 00:31:34.587 "data_size": 7936 00:31:34.587 }, 00:31:34.587 { 00:31:34.587 "name": "BaseBdev2", 00:31:34.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.587 "is_configured": false, 00:31:34.587 "data_offset": 0, 00:31:34.587 "data_size": 0 00:31:34.587 } 00:31:34.587 ] 00:31:34.587 }' 00:31:34.587 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:34.587 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:35.200 14:24:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:35.200 [2024-07-15 14:24:21.188374] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:35.200 [2024-07-15 14:24:21.188652] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:31:35.459 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:31:35.718 [2024-07-15 14:24:21.488545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:35.718 [2024-07-15 14:24:21.490798] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:31:35.718 [2024-07-15 14:24:21.491028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:35.718 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:35.977 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:35.977 "name": "Existed_Raid", 00:31:35.977 "uuid": "a418d32a-e952-47f3-9e86-ae7022bcf40f", 00:31:35.977 "strip_size_kb": 0, 00:31:35.977 "state": "configuring", 00:31:35.977 "raid_level": "raid1", 00:31:35.977 "superblock": true, 00:31:35.977 "num_base_bdevs": 2, 00:31:35.977 "num_base_bdevs_discovered": 1, 00:31:35.977 "num_base_bdevs_operational": 2, 00:31:35.977 "base_bdevs_list": [ 00:31:35.977 { 00:31:35.977 "name": "BaseBdev1", 00:31:35.977 "uuid": "26baa16a-07dc-4043-89bd-dcaf8c10ea38", 00:31:35.977 "is_configured": true, 00:31:35.977 "data_offset": 256, 00:31:35.977 "data_size": 7936 00:31:35.977 }, 00:31:35.977 { 00:31:35.977 "name": "BaseBdev2", 00:31:35.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.977 "is_configured": false, 00:31:35.977 "data_offset": 0, 00:31:35.977 "data_size": 0 00:31:35.977 } 00:31:35.977 ] 00:31:35.977 }' 00:31:35.977 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:35.977 14:24:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:36.545 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:31:37.120 [2024-07-15 14:24:22.847225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:37.120 [2024-07-15 14:24:22.847778] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:31:37.120 [2024-07-15 14:24:22.847918] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:37.120 [2024-07-15 14:24:22.848085] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:31:37.120 [2024-07-15 14:24:22.848463] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:31:37.120 [2024-07-15 14:24:22.848601] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:31:37.120 [2024-07-15 14:24:22.848841] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:37.120 BaseBdev2 00:31:37.120 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:31:37.120 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:31:37.120 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:31:37.120 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:31:37.120 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:31:37.120 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:31:37.120 14:24:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:31:37.377 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:31:37.635 [ 00:31:37.635 { 00:31:37.635 "name": "BaseBdev2", 00:31:37.635 "aliases": [ 00:31:37.635 "79f0abdb-4d27-410f-9ccf-c5d5582f00da" 00:31:37.635 ], 00:31:37.635 "product_name": "Malloc disk", 00:31:37.635 "block_size": 4096, 00:31:37.635 "num_blocks": 8192, 00:31:37.635 "uuid": "79f0abdb-4d27-410f-9ccf-c5d5582f00da", 00:31:37.635 "assigned_rate_limits": { 00:31:37.635 "rw_ios_per_sec": 0, 00:31:37.635 "rw_mbytes_per_sec": 0, 00:31:37.635 "r_mbytes_per_sec": 0, 00:31:37.635 "w_mbytes_per_sec": 0 00:31:37.635 }, 00:31:37.635 "claimed": true, 00:31:37.635 "claim_type": "exclusive_write", 00:31:37.635 "zoned": false, 00:31:37.635 "supported_io_types": { 00:31:37.635 "read": true, 00:31:37.635 "write": true, 00:31:37.635 "unmap": true, 00:31:37.635 "flush": true, 00:31:37.635 "reset": true, 00:31:37.635 "nvme_admin": false, 00:31:37.635 "nvme_io": false, 00:31:37.635 "nvme_io_md": false, 00:31:37.635 "write_zeroes": true, 00:31:37.635 "zcopy": true, 00:31:37.635 "get_zone_info": false, 00:31:37.635 "zone_management": false, 00:31:37.635 "zone_append": false, 00:31:37.635 "compare": false, 00:31:37.635 "compare_and_write": false, 00:31:37.635 "abort": true, 00:31:37.635 "seek_hole": false, 00:31:37.635 "seek_data": false, 00:31:37.635 "copy": true, 00:31:37.635 "nvme_iov_md": false 00:31:37.635 }, 00:31:37.635 "memory_domains": [ 00:31:37.635 { 00:31:37.635 "dma_device_id": "system", 00:31:37.635 "dma_device_type": 1 00:31:37.635 }, 00:31:37.635 { 00:31:37.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:37.635 "dma_device_type": 2 00:31:37.635 } 00:31:37.635 ], 00:31:37.635 "driver_specific": {} 00:31:37.635 } 00:31:37.635 ] 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:37.635 
14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:37.635 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:37.893 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:37.893 "name": "Existed_Raid", 00:31:37.893 "uuid": "a418d32a-e952-47f3-9e86-ae7022bcf40f", 00:31:37.893 "strip_size_kb": 0, 00:31:37.893 "state": "online", 00:31:37.893 "raid_level": "raid1", 00:31:37.893 "superblock": true, 00:31:37.893 "num_base_bdevs": 2, 00:31:37.893 "num_base_bdevs_discovered": 2, 00:31:37.893 "num_base_bdevs_operational": 2, 00:31:37.893 "base_bdevs_list": [ 00:31:37.893 { 00:31:37.893 "name": "BaseBdev1", 00:31:37.893 "uuid": "26baa16a-07dc-4043-89bd-dcaf8c10ea38", 00:31:37.893 "is_configured": true, 00:31:37.893 "data_offset": 256, 00:31:37.893 "data_size": 7936 00:31:37.893 }, 00:31:37.893 { 00:31:37.893 "name": "BaseBdev2", 00:31:37.893 "uuid": "79f0abdb-4d27-410f-9ccf-c5d5582f00da", 00:31:37.893 "is_configured": true, 00:31:37.893 "data_offset": 256, 00:31:37.893 "data_size": 7936 00:31:37.893 } 00:31:37.893 ] 00:31:37.893 }' 00:31:37.893 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:37.893 14:24:23 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:38.523 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:31:38.782 [2024-07-15 14:24:24.611766] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:38.782 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:38.782 "name": "Existed_Raid", 00:31:38.782 "aliases": [ 00:31:38.782 "a418d32a-e952-47f3-9e86-ae7022bcf40f" 00:31:38.782 ], 00:31:38.782 "product_name": "Raid Volume", 00:31:38.782 "block_size": 4096, 00:31:38.782 
"num_blocks": 7936, 00:31:38.782 "uuid": "a418d32a-e952-47f3-9e86-ae7022bcf40f", 00:31:38.782 "assigned_rate_limits": { 00:31:38.782 "rw_ios_per_sec": 0, 00:31:38.782 "rw_mbytes_per_sec": 0, 00:31:38.782 "r_mbytes_per_sec": 0, 00:31:38.782 "w_mbytes_per_sec": 0 00:31:38.782 }, 00:31:38.782 "claimed": false, 00:31:38.782 "zoned": false, 00:31:38.782 "supported_io_types": { 00:31:38.782 "read": true, 00:31:38.782 "write": true, 00:31:38.782 "unmap": false, 00:31:38.782 "flush": false, 00:31:38.782 "reset": true, 00:31:38.782 "nvme_admin": false, 00:31:38.782 "nvme_io": false, 00:31:38.782 "nvme_io_md": false, 00:31:38.782 "write_zeroes": true, 00:31:38.782 "zcopy": false, 00:31:38.782 "get_zone_info": false, 00:31:38.782 "zone_management": false, 00:31:38.782 "zone_append": false, 00:31:38.782 "compare": false, 00:31:38.782 "compare_and_write": false, 00:31:38.782 "abort": false, 00:31:38.782 "seek_hole": false, 00:31:38.782 "seek_data": false, 00:31:38.782 "copy": false, 00:31:38.782 "nvme_iov_md": false 00:31:38.782 }, 00:31:38.782 "memory_domains": [ 00:31:38.782 { 00:31:38.782 "dma_device_id": "system", 00:31:38.782 "dma_device_type": 1 00:31:38.782 }, 00:31:38.782 { 00:31:38.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:38.782 "dma_device_type": 2 00:31:38.782 }, 00:31:38.782 { 00:31:38.782 "dma_device_id": "system", 00:31:38.782 "dma_device_type": 1 00:31:38.782 }, 00:31:38.782 { 00:31:38.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:38.782 "dma_device_type": 2 00:31:38.782 } 00:31:38.782 ], 00:31:38.782 "driver_specific": { 00:31:38.782 "raid": { 00:31:38.782 "uuid": "a418d32a-e952-47f3-9e86-ae7022bcf40f", 00:31:38.782 "strip_size_kb": 0, 00:31:38.782 "state": "online", 00:31:38.782 "raid_level": "raid1", 00:31:38.782 "superblock": true, 00:31:38.782 "num_base_bdevs": 2, 00:31:38.782 "num_base_bdevs_discovered": 2, 00:31:38.782 "num_base_bdevs_operational": 2, 00:31:38.782 "base_bdevs_list": [ 00:31:38.782 { 00:31:38.782 "name": "BaseBdev1", 00:31:38.782 "uuid": "26baa16a-07dc-4043-89bd-dcaf8c10ea38", 00:31:38.782 "is_configured": true, 00:31:38.782 "data_offset": 256, 00:31:38.782 "data_size": 7936 00:31:38.782 }, 00:31:38.782 { 00:31:38.782 "name": "BaseBdev2", 00:31:38.782 "uuid": "79f0abdb-4d27-410f-9ccf-c5d5582f00da", 00:31:38.782 "is_configured": true, 00:31:38.782 "data_offset": 256, 00:31:38.782 "data_size": 7936 00:31:38.782 } 00:31:38.782 ] 00:31:38.782 } 00:31:38.782 } 00:31:38.782 }' 00:31:38.782 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:38.782 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:31:38.782 BaseBdev2' 00:31:38.782 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:38.782 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:31:38.782 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:39.041 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:39.041 "name": "BaseBdev1", 00:31:39.041 "aliases": [ 00:31:39.041 "26baa16a-07dc-4043-89bd-dcaf8c10ea38" 00:31:39.041 ], 00:31:39.041 "product_name": "Malloc disk", 00:31:39.041 "block_size": 4096, 00:31:39.041 "num_blocks": 8192, 
00:31:39.041 "uuid": "26baa16a-07dc-4043-89bd-dcaf8c10ea38", 00:31:39.041 "assigned_rate_limits": { 00:31:39.041 "rw_ios_per_sec": 0, 00:31:39.041 "rw_mbytes_per_sec": 0, 00:31:39.041 "r_mbytes_per_sec": 0, 00:31:39.041 "w_mbytes_per_sec": 0 00:31:39.041 }, 00:31:39.041 "claimed": true, 00:31:39.041 "claim_type": "exclusive_write", 00:31:39.041 "zoned": false, 00:31:39.041 "supported_io_types": { 00:31:39.041 "read": true, 00:31:39.041 "write": true, 00:31:39.041 "unmap": true, 00:31:39.041 "flush": true, 00:31:39.041 "reset": true, 00:31:39.041 "nvme_admin": false, 00:31:39.041 "nvme_io": false, 00:31:39.041 "nvme_io_md": false, 00:31:39.041 "write_zeroes": true, 00:31:39.041 "zcopy": true, 00:31:39.041 "get_zone_info": false, 00:31:39.041 "zone_management": false, 00:31:39.041 "zone_append": false, 00:31:39.041 "compare": false, 00:31:39.041 "compare_and_write": false, 00:31:39.041 "abort": true, 00:31:39.041 "seek_hole": false, 00:31:39.041 "seek_data": false, 00:31:39.041 "copy": true, 00:31:39.041 "nvme_iov_md": false 00:31:39.041 }, 00:31:39.041 "memory_domains": [ 00:31:39.041 { 00:31:39.041 "dma_device_id": "system", 00:31:39.041 "dma_device_type": 1 00:31:39.041 }, 00:31:39.041 { 00:31:39.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:39.041 "dma_device_type": 2 00:31:39.041 } 00:31:39.041 ], 00:31:39.041 "driver_specific": {} 00:31:39.041 }' 00:31:39.041 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:39.041 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:39.041 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:39.041 14:24:24 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:39.299 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:39.300 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:39.300 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:39.559 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:39.559 "name": "BaseBdev2", 00:31:39.559 "aliases": [ 00:31:39.559 "79f0abdb-4d27-410f-9ccf-c5d5582f00da" 00:31:39.559 ], 00:31:39.559 "product_name": "Malloc disk", 00:31:39.559 "block_size": 4096, 00:31:39.559 "num_blocks": 8192, 00:31:39.559 "uuid": "79f0abdb-4d27-410f-9ccf-c5d5582f00da", 00:31:39.559 "assigned_rate_limits": { 
00:31:39.559 "rw_ios_per_sec": 0, 00:31:39.559 "rw_mbytes_per_sec": 0, 00:31:39.559 "r_mbytes_per_sec": 0, 00:31:39.559 "w_mbytes_per_sec": 0 00:31:39.559 }, 00:31:39.559 "claimed": true, 00:31:39.559 "claim_type": "exclusive_write", 00:31:39.559 "zoned": false, 00:31:39.559 "supported_io_types": { 00:31:39.559 "read": true, 00:31:39.559 "write": true, 00:31:39.559 "unmap": true, 00:31:39.559 "flush": true, 00:31:39.559 "reset": true, 00:31:39.559 "nvme_admin": false, 00:31:39.559 "nvme_io": false, 00:31:39.559 "nvme_io_md": false, 00:31:39.559 "write_zeroes": true, 00:31:39.559 "zcopy": true, 00:31:39.559 "get_zone_info": false, 00:31:39.559 "zone_management": false, 00:31:39.559 "zone_append": false, 00:31:39.559 "compare": false, 00:31:39.559 "compare_and_write": false, 00:31:39.559 "abort": true, 00:31:39.559 "seek_hole": false, 00:31:39.559 "seek_data": false, 00:31:39.559 "copy": true, 00:31:39.559 "nvme_iov_md": false 00:31:39.559 }, 00:31:39.559 "memory_domains": [ 00:31:39.559 { 00:31:39.559 "dma_device_id": "system", 00:31:39.559 "dma_device_type": 1 00:31:39.559 }, 00:31:39.559 { 00:31:39.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:39.559 "dma_device_type": 2 00:31:39.559 } 00:31:39.559 ], 00:31:39.559 "driver_specific": {} 00:31:39.559 }' 00:31:39.559 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:39.559 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:39.818 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:40.077 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:40.077 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:40.077 14:24:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:31:40.336 [2024-07-15 14:24:26.151932] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state 
Existed_Raid online raid1 0 1 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.336 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:31:40.595 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:40.595 "name": "Existed_Raid", 00:31:40.595 "uuid": "a418d32a-e952-47f3-9e86-ae7022bcf40f", 00:31:40.595 "strip_size_kb": 0, 00:31:40.595 "state": "online", 00:31:40.595 "raid_level": "raid1", 00:31:40.595 "superblock": true, 00:31:40.595 "num_base_bdevs": 2, 00:31:40.595 "num_base_bdevs_discovered": 1, 00:31:40.595 "num_base_bdevs_operational": 1, 00:31:40.595 "base_bdevs_list": [ 00:31:40.595 { 00:31:40.595 "name": null, 00:31:40.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.595 "is_configured": false, 00:31:40.595 "data_offset": 256, 00:31:40.595 "data_size": 7936 00:31:40.595 }, 00:31:40.595 { 00:31:40.595 "name": "BaseBdev2", 00:31:40.595 "uuid": "79f0abdb-4d27-410f-9ccf-c5d5582f00da", 00:31:40.595 "is_configured": true, 00:31:40.595 "data_offset": 256, 00:31:40.595 "data_size": 7936 00:31:40.595 } 00:31:40.595 ] 00:31:40.595 }' 00:31:40.595 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:40.595 14:24:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:31:41.529 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:31:41.529 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:41.529 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:41.529 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:31:41.529 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:31:41.529 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:31:41.529 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:31:41.787 [2024-07-15 14:24:27.741492] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:31:41.787 [2024-07-15 14:24:27.741599] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:42.046 [2024-07-15 14:24:27.828465] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:42.046 [2024-07-15 14:24:27.828529] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:42.046 [2024-07-15 14:24:27.828542] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:31:42.046 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:31:42.046 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:31:42.046 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.046 14:24:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 216136 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 216136 ']' 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 216136 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 216136 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 216136' 00:31:42.369 killing process with pid 216136 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 216136 00:31:42.369 [2024-07-15 14:24:28.160929] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:42.369 [2024-07-15 14:24:28.161053] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:42.369 14:24:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 216136 00:31:43.304 14:24:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:31:43.305 00:31:43.305 real 0m12.799s 00:31:43.305 user 0m22.499s 00:31:43.305 sys 0m1.464s 00:31:43.305 14:24:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:43.305 14:24:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
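Teardown reverses the setup: deleting BaseBdev2, the last remaining base bdev, drives Existed_Raid from online to offline, and only then is the bdev_svc app stopped; killprocess first reads the PID's command name with ps --no-headers -o comm= (reactor_0 here) before deciding how to signal it. A much-simplified sketch of the same teardown, omitting killprocess's name checks:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_pid=216136                          # pid recorded when bdev_svc was started
  $RPC bdev_malloc_delete BaseBdev2        # last base bdev gone -> raid goes offline
  kill "$raid_pid" && wait "$raid_pid"     # killprocess does this after its comm-name check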
00:31:43.305 ************************************ 00:31:43.305 END TEST raid_state_function_test_sb_4k 00:31:43.305 ************************************ 00:31:43.563 14:24:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:31:43.563 14:24:29 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:31:43.563 14:24:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:31:43.563 14:24:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.563 14:24:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:43.563 ************************************ 00:31:43.563 START TEST raid_superblock_test_4k 00:31:43.563 ************************************ 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=216517 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 216517 /var/tmp/spdk-raid.sock 00:31:43.563 14:24:29 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:43.564 14:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 216517 ']' 00:31:43.564 14:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:43.564 14:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:43.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
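Each raid test gets a fresh bdev_svc application on its own UNIX-domain RPC socket, here started with -L bdev_raid so the *DEBUG* messages from bdev_raid.c appear in the log, and waitforlisten blocks until that socket accepts RPCs. A minimal sketch of the launch; backgrounding with & and capturing $! is the usual pattern and shown here only for illustration:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # helper from autotest_common.sh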
00:31:43.564 14:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:43.564 14:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:43.564 14:24:29 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:43.564 [2024-07-15 14:24:29.398347] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:31:43.564 [2024-07-15 14:24:29.398593] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid216517 ] 00:31:43.564 [2024-07-15 14:24:29.561366] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.822 [2024-07-15 14:24:29.770913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.081 [2024-07-15 14:24:29.967203] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:31:44.648 malloc1 00:31:44.648 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:44.906 [2024-07-15 14:24:30.872311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:44.906 [2024-07-15 14:24:30.873118] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:44.906 [2024-07-15 14:24:30.873371] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:31:44.906 [2024-07-15 14:24:30.873611] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:44.906 [2024-07-15 14:24:30.875570] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:44.906 [2024-07-15 14:24:30.875859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:44.906 pt1 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- 
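For the superblock test each base device is a malloc bdev wrapped in a passthru bdev with a fixed UUID (00000000-0000-0000-0000-00000000000N), presumably so the base bdevs carry stable, predictable UUIDs once the raid superblock is written. The loop above just did this for malloc1/pt1; a condensed sketch of both iterations, using the names and UUIDs from the log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2; do
    $RPC bdev_malloc_create 32 4096 -b "malloc$i"
    $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
         -u "00000000-0000-0000-0000-00000000000$i"
  done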
bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:44.906 14:24:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:31:45.471 malloc2 00:31:45.471 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:45.730 [2024-07-15 14:24:31.488341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:45.730 [2024-07-15 14:24:31.488926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:45.730 [2024-07-15 14:24:31.489199] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:31:45.730 [2024-07-15 14:24:31.489426] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:45.730 [2024-07-15 14:24:31.491395] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:45.730 [2024-07-15 14:24:31.491643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:45.730 pt2 00:31:45.730 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:45.730 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:45.730 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:31:45.730 [2024-07-15 14:24:31.732517] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:45.988 [2024-07-15 14:24:31.734246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:45.988 [2024-07-15 14:24:31.734578] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:31:45.988 [2024-07-15 14:24:31.734713] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:45.988 [2024-07-15 14:24:31.734980] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:31:45.988 [2024-07-15 14:24:31.735370] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:31:45.988 [2024-07-15 14:24:31.735498] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:31:45.988 [2024-07-15 14:24:31.735756] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- 
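With both passthru bdevs present, bdev_raid_create -s assembles raid_bdev1 and brings it online immediately, and the -s flag reserves space at the front of each base bdev for the on-disk superblock: the 8192-block (4096 B) base bdevs end up exposing data_offset 256 and data_size 7936 in the base_bdevs_list that the following verification reads back. A short sketch of the create plus readback, with the select filter taken from the log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1").base_bdevs_list[] | "\(.name) \(.data_offset) \(.data_size)"'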
bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:45.988 14:24:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.262 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:46.262 "name": "raid_bdev1", 00:31:46.262 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:46.262 "strip_size_kb": 0, 00:31:46.262 "state": "online", 00:31:46.262 "raid_level": "raid1", 00:31:46.262 "superblock": true, 00:31:46.262 "num_base_bdevs": 2, 00:31:46.262 "num_base_bdevs_discovered": 2, 00:31:46.262 "num_base_bdevs_operational": 2, 00:31:46.262 "base_bdevs_list": [ 00:31:46.262 { 00:31:46.262 "name": "pt1", 00:31:46.262 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:46.262 "is_configured": true, 00:31:46.262 "data_offset": 256, 00:31:46.262 "data_size": 7936 00:31:46.262 }, 00:31:46.262 { 00:31:46.262 "name": "pt2", 00:31:46.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:46.262 "is_configured": true, 00:31:46.262 "data_offset": 256, 00:31:46.262 "data_size": 7936 00:31:46.262 } 00:31:46.262 ] 00:31:46.262 }' 00:31:46.262 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:46.262 14:24:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:46.849 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:47.108 [2024-07-15 14:24:32.880840] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:47.108 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:47.108 "name": "raid_bdev1", 00:31:47.108 "aliases": [ 00:31:47.108 "6792d0f3-e398-452b-b26a-a65ecfd09511" 00:31:47.108 ], 00:31:47.108 "product_name": "Raid Volume", 00:31:47.108 "block_size": 4096, 00:31:47.108 "num_blocks": 7936, 00:31:47.108 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:47.108 "assigned_rate_limits": { 00:31:47.108 "rw_ios_per_sec": 0, 00:31:47.108 "rw_mbytes_per_sec": 0, 00:31:47.108 "r_mbytes_per_sec": 0, 00:31:47.108 "w_mbytes_per_sec": 0 00:31:47.108 }, 00:31:47.108 "claimed": false, 00:31:47.108 "zoned": false, 00:31:47.108 "supported_io_types": { 00:31:47.108 "read": true, 00:31:47.108 "write": true, 00:31:47.108 "unmap": false, 00:31:47.108 "flush": false, 00:31:47.108 "reset": true, 00:31:47.108 "nvme_admin": false, 00:31:47.108 "nvme_io": false, 00:31:47.108 "nvme_io_md": false, 00:31:47.108 "write_zeroes": true, 00:31:47.108 "zcopy": false, 00:31:47.108 "get_zone_info": false, 00:31:47.108 "zone_management": false, 00:31:47.108 "zone_append": false, 00:31:47.108 "compare": false, 00:31:47.108 "compare_and_write": false, 00:31:47.108 "abort": false, 00:31:47.108 "seek_hole": false, 00:31:47.108 "seek_data": false, 00:31:47.108 "copy": false, 00:31:47.108 "nvme_iov_md": false 00:31:47.108 }, 00:31:47.108 "memory_domains": [ 00:31:47.108 { 00:31:47.108 "dma_device_id": "system", 00:31:47.108 "dma_device_type": 1 00:31:47.108 }, 00:31:47.108 { 00:31:47.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.108 "dma_device_type": 2 00:31:47.108 }, 00:31:47.108 { 00:31:47.108 "dma_device_id": "system", 00:31:47.108 "dma_device_type": 1 00:31:47.108 }, 00:31:47.108 { 00:31:47.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.108 "dma_device_type": 2 00:31:47.108 } 00:31:47.108 ], 00:31:47.108 "driver_specific": { 00:31:47.108 "raid": { 00:31:47.108 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:47.108 "strip_size_kb": 0, 00:31:47.108 "state": "online", 00:31:47.108 "raid_level": "raid1", 00:31:47.108 "superblock": true, 00:31:47.108 "num_base_bdevs": 2, 00:31:47.108 "num_base_bdevs_discovered": 2, 00:31:47.108 "num_base_bdevs_operational": 2, 00:31:47.108 "base_bdevs_list": [ 00:31:47.108 { 00:31:47.108 "name": "pt1", 00:31:47.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:47.108 "is_configured": true, 00:31:47.108 "data_offset": 256, 00:31:47.108 "data_size": 7936 00:31:47.108 }, 00:31:47.108 { 00:31:47.108 "name": "pt2", 00:31:47.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:47.108 "is_configured": true, 00:31:47.108 "data_offset": 256, 00:31:47.108 "data_size": 7936 00:31:47.108 } 00:31:47.108 ] 00:31:47.108 } 00:31:47.108 } 00:31:47.108 }' 00:31:47.108 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:47.108 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:47.108 pt2' 00:31:47.108 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:47.108 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:47.108 14:24:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:47.366 14:24:33 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:47.366 "name": "pt1", 00:31:47.366 "aliases": [ 00:31:47.366 "00000000-0000-0000-0000-000000000001" 00:31:47.366 ], 00:31:47.366 "product_name": "passthru", 00:31:47.366 "block_size": 4096, 00:31:47.366 "num_blocks": 8192, 00:31:47.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:47.366 "assigned_rate_limits": { 00:31:47.366 "rw_ios_per_sec": 0, 00:31:47.366 "rw_mbytes_per_sec": 0, 00:31:47.366 "r_mbytes_per_sec": 0, 00:31:47.366 "w_mbytes_per_sec": 0 00:31:47.366 }, 00:31:47.366 "claimed": true, 00:31:47.366 "claim_type": "exclusive_write", 00:31:47.366 "zoned": false, 00:31:47.366 "supported_io_types": { 00:31:47.366 "read": true, 00:31:47.366 "write": true, 00:31:47.366 "unmap": true, 00:31:47.366 "flush": true, 00:31:47.366 "reset": true, 00:31:47.366 "nvme_admin": false, 00:31:47.366 "nvme_io": false, 00:31:47.366 "nvme_io_md": false, 00:31:47.366 "write_zeroes": true, 00:31:47.366 "zcopy": true, 00:31:47.366 "get_zone_info": false, 00:31:47.366 "zone_management": false, 00:31:47.366 "zone_append": false, 00:31:47.366 "compare": false, 00:31:47.366 "compare_and_write": false, 00:31:47.366 "abort": true, 00:31:47.366 "seek_hole": false, 00:31:47.366 "seek_data": false, 00:31:47.366 "copy": true, 00:31:47.366 "nvme_iov_md": false 00:31:47.366 }, 00:31:47.366 "memory_domains": [ 00:31:47.366 { 00:31:47.366 "dma_device_id": "system", 00:31:47.366 "dma_device_type": 1 00:31:47.366 }, 00:31:47.366 { 00:31:47.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.366 "dma_device_type": 2 00:31:47.366 } 00:31:47.366 ], 00:31:47.366 "driver_specific": { 00:31:47.366 "passthru": { 00:31:47.366 "name": "pt1", 00:31:47.366 "base_bdev_name": "malloc1" 00:31:47.366 } 00:31:47.366 } 00:31:47.366 }' 00:31:47.366 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.366 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.366 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:47.366 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:47.366 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:47.366 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:47.623 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:47.881 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:47.881 "name": 
"pt2", 00:31:47.881 "aliases": [ 00:31:47.881 "00000000-0000-0000-0000-000000000002" 00:31:47.881 ], 00:31:47.881 "product_name": "passthru", 00:31:47.881 "block_size": 4096, 00:31:47.881 "num_blocks": 8192, 00:31:47.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:47.881 "assigned_rate_limits": { 00:31:47.881 "rw_ios_per_sec": 0, 00:31:47.881 "rw_mbytes_per_sec": 0, 00:31:47.881 "r_mbytes_per_sec": 0, 00:31:47.881 "w_mbytes_per_sec": 0 00:31:47.881 }, 00:31:47.881 "claimed": true, 00:31:47.881 "claim_type": "exclusive_write", 00:31:47.881 "zoned": false, 00:31:47.881 "supported_io_types": { 00:31:47.881 "read": true, 00:31:47.881 "write": true, 00:31:47.881 "unmap": true, 00:31:47.881 "flush": true, 00:31:47.881 "reset": true, 00:31:47.881 "nvme_admin": false, 00:31:47.881 "nvme_io": false, 00:31:47.881 "nvme_io_md": false, 00:31:47.881 "write_zeroes": true, 00:31:47.881 "zcopy": true, 00:31:47.881 "get_zone_info": false, 00:31:47.881 "zone_management": false, 00:31:47.881 "zone_append": false, 00:31:47.881 "compare": false, 00:31:47.881 "compare_and_write": false, 00:31:47.881 "abort": true, 00:31:47.881 "seek_hole": false, 00:31:47.881 "seek_data": false, 00:31:47.881 "copy": true, 00:31:47.881 "nvme_iov_md": false 00:31:47.881 }, 00:31:47.881 "memory_domains": [ 00:31:47.881 { 00:31:47.881 "dma_device_id": "system", 00:31:47.881 "dma_device_type": 1 00:31:47.881 }, 00:31:47.881 { 00:31:47.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:47.881 "dma_device_type": 2 00:31:47.881 } 00:31:47.881 ], 00:31:47.881 "driver_specific": { 00:31:47.881 "passthru": { 00:31:47.881 "name": "pt2", 00:31:47.881 "base_bdev_name": "malloc2" 00:31:47.881 } 00:31:47.881 } 00:31:47.881 }' 00:31:47.881 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:47.881 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:48.139 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:48.139 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:48.139 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:48.139 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:48.139 14:24:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:48.139 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:48.139 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:48.139 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:48.398 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:48.398 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:48.398 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:48.398 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:31:48.656 [2024-07-15 14:24:34.465042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:48.656 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6792d0f3-e398-452b-b26a-a65ecfd09511 00:31:48.656 14:24:34 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@435 -- # '[' -z 6792d0f3-e398-452b-b26a-a65ecfd09511 ']' 00:31:48.656 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:48.914 [2024-07-15 14:24:34.732891] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:48.914 [2024-07-15 14:24:34.733181] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:48.914 [2024-07-15 14:24:34.733383] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:48.914 [2024-07-15 14:24:34.733537] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:48.914 [2024-07-15 14:24:34.733647] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:31:48.914 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.914 14:24:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:31:49.172 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:31:49.172 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:31:49.172 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:49.172 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:49.431 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:49.431 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:49.689 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:49.689 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:49.947 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:49.948 
14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:49.948 14:24:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:31:50.219 [2024-07-15 14:24:36.121268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:50.219 [2024-07-15 14:24:36.123031] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:50.219 [2024-07-15 14:24:36.123255] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:50.219 [2024-07-15 14:24:36.123854] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:50.219 [2024-07-15 14:24:36.124127] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:50.219 [2024-07-15 14:24:36.124275] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:31:50.219 request: 00:31:50.219 { 00:31:50.219 "name": "raid_bdev1", 00:31:50.219 "raid_level": "raid1", 00:31:50.219 "base_bdevs": [ 00:31:50.219 "malloc1", 00:31:50.219 "malloc2" 00:31:50.219 ], 00:31:50.219 "superblock": false, 00:31:50.219 "method": "bdev_raid_create", 00:31:50.219 "req_id": 1 00:31:50.219 } 00:31:50.219 Got JSON-RPC error response 00:31:50.219 response: 00:31:50.219 { 00:31:50.219 "code": -17, 00:31:50.219 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:50.219 } 00:31:50.219 14:24:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:31:50.219 14:24:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:50.219 14:24:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:50.219 14:24:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:50.219 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:50.219 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:31:50.527 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:31:50.527 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:31:50.527 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:50.785 [2024-07-15 14:24:36.669301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:50.786 [2024-07-15 14:24:36.669780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:50.786 [2024-07-15 14:24:36.670023] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:31:50.786 [2024-07-15 14:24:36.670243] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:50.786 [2024-07-15 14:24:36.672174] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:50.786 [2024-07-15 14:24:36.672426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:50.786 [2024-07-15 14:24:36.672712] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:50.786 [2024-07-15 14:24:36.672911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:50.786 pt1 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.786 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.044 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:51.044 "name": "raid_bdev1", 00:31:51.044 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:51.044 "strip_size_kb": 0, 00:31:51.044 "state": "configuring", 00:31:51.044 "raid_level": "raid1", 00:31:51.044 "superblock": true, 00:31:51.044 "num_base_bdevs": 2, 00:31:51.044 "num_base_bdevs_discovered": 1, 00:31:51.044 "num_base_bdevs_operational": 2, 00:31:51.044 "base_bdevs_list": [ 00:31:51.044 { 00:31:51.044 "name": "pt1", 00:31:51.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:51.044 "is_configured": true, 00:31:51.044 "data_offset": 256, 00:31:51.044 "data_size": 7936 00:31:51.044 }, 00:31:51.044 { 00:31:51.044 "name": null, 00:31:51.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:51.044 "is_configured": false, 00:31:51.044 "data_offset": 256, 00:31:51.044 "data_size": 7936 00:31:51.044 } 00:31:51.044 ] 00:31:51.044 }' 00:31:51.044 14:24:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:51.044 14:24:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:51.610 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:31:51.610 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:31:51.610 
14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:51.610 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:52.175 [2024-07-15 14:24:37.885529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:52.175 [2024-07-15 14:24:37.886206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:52.175 [2024-07-15 14:24:37.886480] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:52.175 [2024-07-15 14:24:37.886714] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:52.175 [2024-07-15 14:24:37.887304] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:52.175 [2024-07-15 14:24:37.887583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:52.175 [2024-07-15 14:24:37.887880] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:52.176 [2024-07-15 14:24:37.888026] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:52.176 [2024-07-15 14:24:37.888274] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:31:52.176 [2024-07-15 14:24:37.888403] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:52.176 [2024-07-15 14:24:37.888537] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:52.176 [2024-07-15 14:24:37.889122] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:31:52.176 [2024-07-15 14:24:37.889257] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:31:52.176 [2024-07-15 14:24:37.889513] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:52.176 pt2 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:52.176 14:24:37 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.176 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:52.176 "name": "raid_bdev1", 00:31:52.176 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:52.176 "strip_size_kb": 0, 00:31:52.176 "state": "online", 00:31:52.176 "raid_level": "raid1", 00:31:52.176 "superblock": true, 00:31:52.176 "num_base_bdevs": 2, 00:31:52.176 "num_base_bdevs_discovered": 2, 00:31:52.176 "num_base_bdevs_operational": 2, 00:31:52.176 "base_bdevs_list": [ 00:31:52.176 { 00:31:52.176 "name": "pt1", 00:31:52.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:52.176 "is_configured": true, 00:31:52.176 "data_offset": 256, 00:31:52.176 "data_size": 7936 00:31:52.176 }, 00:31:52.176 { 00:31:52.176 "name": "pt2", 00:31:52.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:52.176 "is_configured": true, 00:31:52.176 "data_offset": 256, 00:31:52.176 "data_size": 7936 00:31:52.176 } 00:31:52.176 ] 00:31:52.176 }' 00:31:52.176 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:52.176 14:24:38 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:52.743 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:31:52.743 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:53.002 [2024-07-15 14:24:38.961949] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:53.002 "name": "raid_bdev1", 00:31:53.002 "aliases": [ 00:31:53.002 "6792d0f3-e398-452b-b26a-a65ecfd09511" 00:31:53.002 ], 00:31:53.002 "product_name": "Raid Volume", 00:31:53.002 "block_size": 4096, 00:31:53.002 "num_blocks": 7936, 00:31:53.002 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:53.002 "assigned_rate_limits": { 00:31:53.002 "rw_ios_per_sec": 0, 00:31:53.002 "rw_mbytes_per_sec": 0, 00:31:53.002 "r_mbytes_per_sec": 0, 00:31:53.002 "w_mbytes_per_sec": 0 00:31:53.002 }, 00:31:53.002 "claimed": false, 00:31:53.002 "zoned": false, 00:31:53.002 "supported_io_types": { 00:31:53.002 "read": true, 00:31:53.002 "write": true, 00:31:53.002 "unmap": false, 00:31:53.002 "flush": false, 00:31:53.002 "reset": true, 00:31:53.002 "nvme_admin": false, 00:31:53.002 "nvme_io": false, 00:31:53.002 "nvme_io_md": false, 00:31:53.002 "write_zeroes": true, 00:31:53.002 "zcopy": false, 00:31:53.002 "get_zone_info": false, 00:31:53.002 "zone_management": false, 00:31:53.002 
"zone_append": false, 00:31:53.002 "compare": false, 00:31:53.002 "compare_and_write": false, 00:31:53.002 "abort": false, 00:31:53.002 "seek_hole": false, 00:31:53.002 "seek_data": false, 00:31:53.002 "copy": false, 00:31:53.002 "nvme_iov_md": false 00:31:53.002 }, 00:31:53.002 "memory_domains": [ 00:31:53.002 { 00:31:53.002 "dma_device_id": "system", 00:31:53.002 "dma_device_type": 1 00:31:53.002 }, 00:31:53.002 { 00:31:53.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.002 "dma_device_type": 2 00:31:53.002 }, 00:31:53.002 { 00:31:53.002 "dma_device_id": "system", 00:31:53.002 "dma_device_type": 1 00:31:53.002 }, 00:31:53.002 { 00:31:53.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.002 "dma_device_type": 2 00:31:53.002 } 00:31:53.002 ], 00:31:53.002 "driver_specific": { 00:31:53.002 "raid": { 00:31:53.002 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:53.002 "strip_size_kb": 0, 00:31:53.002 "state": "online", 00:31:53.002 "raid_level": "raid1", 00:31:53.002 "superblock": true, 00:31:53.002 "num_base_bdevs": 2, 00:31:53.002 "num_base_bdevs_discovered": 2, 00:31:53.002 "num_base_bdevs_operational": 2, 00:31:53.002 "base_bdevs_list": [ 00:31:53.002 { 00:31:53.002 "name": "pt1", 00:31:53.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:53.002 "is_configured": true, 00:31:53.002 "data_offset": 256, 00:31:53.002 "data_size": 7936 00:31:53.002 }, 00:31:53.002 { 00:31:53.002 "name": "pt2", 00:31:53.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:53.002 "is_configured": true, 00:31:53.002 "data_offset": 256, 00:31:53.002 "data_size": 7936 00:31:53.002 } 00:31:53.002 ] 00:31:53.002 } 00:31:53.002 } 00:31:53.002 }' 00:31:53.002 14:24:38 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:53.260 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:53.260 pt2' 00:31:53.260 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:53.260 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:53.260 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:53.519 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:53.519 "name": "pt1", 00:31:53.519 "aliases": [ 00:31:53.519 "00000000-0000-0000-0000-000000000001" 00:31:53.519 ], 00:31:53.519 "product_name": "passthru", 00:31:53.519 "block_size": 4096, 00:31:53.519 "num_blocks": 8192, 00:31:53.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:31:53.519 "assigned_rate_limits": { 00:31:53.519 "rw_ios_per_sec": 0, 00:31:53.519 "rw_mbytes_per_sec": 0, 00:31:53.519 "r_mbytes_per_sec": 0, 00:31:53.519 "w_mbytes_per_sec": 0 00:31:53.519 }, 00:31:53.519 "claimed": true, 00:31:53.519 "claim_type": "exclusive_write", 00:31:53.519 "zoned": false, 00:31:53.519 "supported_io_types": { 00:31:53.519 "read": true, 00:31:53.519 "write": true, 00:31:53.519 "unmap": true, 00:31:53.519 "flush": true, 00:31:53.519 "reset": true, 00:31:53.519 "nvme_admin": false, 00:31:53.519 "nvme_io": false, 00:31:53.519 "nvme_io_md": false, 00:31:53.519 "write_zeroes": true, 00:31:53.519 "zcopy": true, 00:31:53.519 "get_zone_info": false, 00:31:53.519 "zone_management": false, 00:31:53.519 "zone_append": false, 00:31:53.519 "compare": false, 00:31:53.519 
"compare_and_write": false, 00:31:53.519 "abort": true, 00:31:53.519 "seek_hole": false, 00:31:53.519 "seek_data": false, 00:31:53.519 "copy": true, 00:31:53.519 "nvme_iov_md": false 00:31:53.519 }, 00:31:53.519 "memory_domains": [ 00:31:53.519 { 00:31:53.519 "dma_device_id": "system", 00:31:53.519 "dma_device_type": 1 00:31:53.519 }, 00:31:53.519 { 00:31:53.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:53.519 "dma_device_type": 2 00:31:53.519 } 00:31:53.519 ], 00:31:53.519 "driver_specific": { 00:31:53.519 "passthru": { 00:31:53.519 "name": "pt1", 00:31:53.519 "base_bdev_name": "malloc1" 00:31:53.519 } 00:31:53.519 } 00:31:53.519 }' 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.520 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:53.779 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:53.779 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.779 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:53.779 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:53.779 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:53.779 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:53.779 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:54.038 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:54.038 "name": "pt2", 00:31:54.038 "aliases": [ 00:31:54.038 "00000000-0000-0000-0000-000000000002" 00:31:54.038 ], 00:31:54.038 "product_name": "passthru", 00:31:54.038 "block_size": 4096, 00:31:54.038 "num_blocks": 8192, 00:31:54.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:54.038 "assigned_rate_limits": { 00:31:54.038 "rw_ios_per_sec": 0, 00:31:54.038 "rw_mbytes_per_sec": 0, 00:31:54.038 "r_mbytes_per_sec": 0, 00:31:54.038 "w_mbytes_per_sec": 0 00:31:54.038 }, 00:31:54.038 "claimed": true, 00:31:54.038 "claim_type": "exclusive_write", 00:31:54.038 "zoned": false, 00:31:54.038 "supported_io_types": { 00:31:54.038 "read": true, 00:31:54.038 "write": true, 00:31:54.038 "unmap": true, 00:31:54.038 "flush": true, 00:31:54.038 "reset": true, 00:31:54.038 "nvme_admin": false, 00:31:54.038 "nvme_io": false, 00:31:54.038 "nvme_io_md": false, 00:31:54.038 "write_zeroes": true, 00:31:54.038 "zcopy": true, 00:31:54.038 "get_zone_info": false, 00:31:54.038 "zone_management": false, 00:31:54.038 "zone_append": false, 00:31:54.038 "compare": false, 00:31:54.038 "compare_and_write": false, 00:31:54.038 "abort": true, 00:31:54.038 "seek_hole": false, 00:31:54.038 
"seek_data": false, 00:31:54.038 "copy": true, 00:31:54.038 "nvme_iov_md": false 00:31:54.038 }, 00:31:54.038 "memory_domains": [ 00:31:54.038 { 00:31:54.038 "dma_device_id": "system", 00:31:54.038 "dma_device_type": 1 00:31:54.038 }, 00:31:54.038 { 00:31:54.038 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:54.038 "dma_device_type": 2 00:31:54.038 } 00:31:54.038 ], 00:31:54.038 "driver_specific": { 00:31:54.038 "passthru": { 00:31:54.038 "name": "pt2", 00:31:54.038 "base_bdev_name": "malloc2" 00:31:54.038 } 00:31:54.038 } 00:31:54.038 }' 00:31:54.038 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.038 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:54.038 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:31:54.038 14:24:39 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.038 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:31:54.297 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:54.555 [2024-07-15 14:24:40.494191] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:54.555 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 6792d0f3-e398-452b-b26a-a65ecfd09511 '!=' 6792d0f3-e398-452b-b26a-a65ecfd09511 ']' 00:31:54.555 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:31:54.555 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:54.555 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:31:54.555 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:54.813 [2024-07-15 14:24:40.746064] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:54.813 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:54.813 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:54.813 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:54.814 14:24:40 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.814 14:24:40 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:55.071 14:24:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:55.072 "name": "raid_bdev1", 00:31:55.072 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:55.072 "strip_size_kb": 0, 00:31:55.072 "state": "online", 00:31:55.072 "raid_level": "raid1", 00:31:55.072 "superblock": true, 00:31:55.072 "num_base_bdevs": 2, 00:31:55.072 "num_base_bdevs_discovered": 1, 00:31:55.072 "num_base_bdevs_operational": 1, 00:31:55.072 "base_bdevs_list": [ 00:31:55.072 { 00:31:55.072 "name": null, 00:31:55.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.072 "is_configured": false, 00:31:55.072 "data_offset": 256, 00:31:55.072 "data_size": 7936 00:31:55.072 }, 00:31:55.072 { 00:31:55.072 "name": "pt2", 00:31:55.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:55.072 "is_configured": true, 00:31:55.072 "data_offset": 256, 00:31:55.072 "data_size": 7936 00:31:55.072 } 00:31:55.072 ] 00:31:55.072 }' 00:31:55.072 14:24:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:55.072 14:24:41 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:56.007 14:24:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:56.007 [2024-07-15 14:24:41.930210] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:56.007 [2024-07-15 14:24:41.930273] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:56.007 [2024-07-15 14:24:41.930353] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:56.007 [2024-07-15 14:24:41.930395] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:56.007 [2024-07-15 14:24:41.930406] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:31:56.007 14:24:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:31:56.007 14:24:41 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.266 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:31:56.266 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:31:56.266 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:31:56.266 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 
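For reference, the verify_raid_bdev_state checks that recur throughout this test (bdev/bdev_raid.sh@116-126) reduce to one RPC query plus jq field comparisons. A minimal stand-alone sketch of the check performed at this point, assuming the same rpc.py script and /var/tmp/spdk-raid.sock socket used in the trace:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Pull the raid_bdev1 entry out of the full raid bdev list, as bdev_raid.sh@126 does
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# Assert the fields the harness compares: state, level and base bdev counts
[[ "$(jq -r .state <<< "$info")" == "online" ]]
[[ "$(jq -r .raid_level <<< "$info")" == "raid1" ]]
[[ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 1 ]]
[[ "$(jq -r .num_base_bdevs_operational <<< "$info")" -eq 1 ]]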
00:31:56.266 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:56.525 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:56.525 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:56.525 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:31:56.525 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:56.525 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:31:56.525 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:56.782 [2024-07-15 14:24:42.702348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:56.782 [2024-07-15 14:24:42.702824] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.782 [2024-07-15 14:24:42.702932] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:31:56.782 [2024-07-15 14:24:42.703023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.782 [2024-07-15 14:24:42.704860] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.782 [2024-07-15 14:24:42.705009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:56.782 [2024-07-15 14:24:42.705191] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:56.782 [2024-07-15 14:24:42.705261] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:56.782 [2024-07-15 14:24:42.705357] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:31:56.782 [2024-07-15 14:24:42.705371] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:56.782 [2024-07-15 14:24:42.705446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:56.782 [2024-07-15 14:24:42.705671] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:31:56.782 [2024-07-15 14:24:42.705686] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:31:56.783 [2024-07-15 14:24:42.705808] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:56.783 pt2 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:56.783 14:24:42 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.783 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.041 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:57.041 "name": "raid_bdev1", 00:31:57.041 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:57.041 "strip_size_kb": 0, 00:31:57.041 "state": "online", 00:31:57.041 "raid_level": "raid1", 00:31:57.041 "superblock": true, 00:31:57.041 "num_base_bdevs": 2, 00:31:57.041 "num_base_bdevs_discovered": 1, 00:31:57.041 "num_base_bdevs_operational": 1, 00:31:57.041 "base_bdevs_list": [ 00:31:57.041 { 00:31:57.041 "name": null, 00:31:57.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.041 "is_configured": false, 00:31:57.041 "data_offset": 256, 00:31:57.041 "data_size": 7936 00:31:57.041 }, 00:31:57.041 { 00:31:57.041 "name": "pt2", 00:31:57.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:57.041 "is_configured": true, 00:31:57.041 "data_offset": 256, 00:31:57.041 "data_size": 7936 00:31:57.041 } 00:31:57.041 ] 00:31:57.041 }' 00:31:57.041 14:24:42 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:57.041 14:24:42 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:57.974 14:24:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:57.974 [2024-07-15 14:24:43.894467] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:57.974 [2024-07-15 14:24:43.894505] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:57.974 [2024-07-15 14:24:43.894572] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:57.974 [2024-07-15 14:24:43.894611] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:57.974 [2024-07-15 14:24:43.894622] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:31:57.974 14:24:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:57.974 14:24:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:31:58.312 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:31:58.312 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:31:58.312 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:31:58.312 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:58.571 [2024-07-15 14:24:44.382534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:31:58.571 [2024-07-15 14:24:44.383045] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:58.571 [2024-07-15 14:24:44.383186] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:58.571 [2024-07-15 14:24:44.383274] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:58.571 [2024-07-15 14:24:44.385124] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:58.571 [2024-07-15 14:24:44.385260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:58.571 [2024-07-15 14:24:44.385430] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:58.571 [2024-07-15 14:24:44.385481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:58.571 [2024-07-15 14:24:44.385621] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:58.571 [2024-07-15 14:24:44.385636] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:58.571 [2024-07-15 14:24:44.385649] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:31:58.571 [2024-07-15 14:24:44.385711] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:58.571 [2024-07-15 14:24:44.385785] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:31:58.571 [2024-07-15 14:24:44.385799] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:31:58.571 [2024-07-15 14:24:44.385879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:58.571 [2024-07-15 14:24:44.386097] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:31:58.571 [2024-07-15 14:24:44.386111] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:31:58.571 [2024-07-15 14:24:44.386211] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:58.571 pt1 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.572 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.831 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:58.831 "name": "raid_bdev1", 00:31:58.831 "uuid": "6792d0f3-e398-452b-b26a-a65ecfd09511", 00:31:58.831 "strip_size_kb": 0, 00:31:58.831 "state": "online", 00:31:58.831 "raid_level": "raid1", 00:31:58.831 "superblock": true, 00:31:58.831 "num_base_bdevs": 2, 00:31:58.831 "num_base_bdevs_discovered": 1, 00:31:58.831 "num_base_bdevs_operational": 1, 00:31:58.831 "base_bdevs_list": [ 00:31:58.831 { 00:31:58.831 "name": null, 00:31:58.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:58.831 "is_configured": false, 00:31:58.831 "data_offset": 256, 00:31:58.831 "data_size": 7936 00:31:58.831 }, 00:31:58.831 { 00:31:58.831 "name": "pt2", 00:31:58.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:58.831 "is_configured": true, 00:31:58.831 "data_offset": 256, 00:31:58.831 "data_size": 7936 00:31:58.831 } 00:31:58.831 ] 00:31:58.831 }' 00:31:58.831 14:24:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:58.831 14:24:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:31:59.399 14:24:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:59.399 14:24:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:59.658 14:24:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:31:59.658 14:24:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:59.658 14:24:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:31:59.916 [2024-07-15 14:24:45.750915] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 6792d0f3-e398-452b-b26a-a65ecfd09511 '!=' 6792d0f3-e398-452b-b26a-a65ecfd09511 ']' 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 216517 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 216517 ']' 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 216517 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 216517 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 216517' 00:31:59.916 killing process with pid 216517 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@967 -- # kill 216517 00:31:59.916 [2024-07-15 14:24:45.794848] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:59.916 [2024-07-15 14:24:45.794953] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:59.916 [2024-07-15 14:24:45.795005] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:59.916 [2024-07-15 14:24:45.795018] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:31:59.916 14:24:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 216517 00:32:00.174 [2024-07-15 14:24:46.009106] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:01.550 14:24:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:32:01.550 00:32:01.550 real 0m17.842s 00:32:01.550 user 0m32.235s 00:32:01.550 sys 0m2.094s 00:32:01.550 14:24:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:01.550 14:24:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:32:01.550 ************************************ 00:32:01.550 END TEST raid_superblock_test_4k 00:32:01.550 ************************************ 00:32:01.550 14:24:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:01.550 14:24:47 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:32:01.550 14:24:47 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:32:01.550 14:24:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:32:01.550 14:24:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.550 14:24:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:01.550 ************************************ 00:32:01.550 START TEST raid_rebuild_test_sb_4k 00:32:01.550 ************************************ 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:32:01.551 14:24:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=217055 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 217055 /var/tmp/spdk-raid.sock 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 217055 ']' 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:01.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:01.551 14:24:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:01.551 [2024-07-15 14:24:47.309911] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:32:01.551 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:01.551 Zero copy mechanism will not be used. 
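(Note on the step traced above: bdev_raid.sh@595-597 launch the bdevperf example app in wait-for-RPC mode (-z) against a dedicated UNIX-domain socket and then block in waitforlisten until the socket answers; the -o 3M I/O size is what produces the "greater than zero copy threshold" notice in the startup banner. The following is a minimal stand-alone sketch of that launch pattern, reusing only the paths, socket and flags shown in the trace; the backgrounding, the raid_pid assignment style and the final example RPC are illustrative, and it assumes autotest_common.sh has already been sourced so that waitforlisten is defined.)
rpc_sock=/var/tmp/spdk-raid.sock
# start bdevperf idle (-z: do not run the workload until configured over RPC), raid debug logs on
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# waitforlisten (helper from autotest_common.sh, assumed sourced) polls until the app accepts RPCs
waitforlisten "$raid_pid" "$rpc_sock"
# from here on every bdev in the test is created through rpc.py on the same socket, e.g.:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" bdev_malloc_create 32 4096 -b BaseBdev1_malloc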
00:32:01.551 [2024-07-15 14:24:47.310110] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid217055 ] 00:32:01.551 [2024-07-15 14:24:47.483270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.810 [2024-07-15 14:24:47.756009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.068 [2024-07-15 14:24:47.970982] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:02.636 14:24:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:02.636 14:24:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:32:02.636 14:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:02.636 14:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:32:02.935 BaseBdev1_malloc 00:32:02.935 14:24:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:03.193 [2024-07-15 14:24:49.003697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:03.193 [2024-07-15 14:24:49.004188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:03.193 [2024-07-15 14:24:49.004320] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:32:03.193 [2024-07-15 14:24:49.004416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:03.193 [2024-07-15 14:24:49.006288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:03.193 [2024-07-15 14:24:49.006438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:03.193 BaseBdev1 00:32:03.193 14:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:32:03.193 14:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:32:03.452 BaseBdev2_malloc 00:32:03.452 14:24:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:03.711 [2024-07-15 14:24:49.631916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:03.711 [2024-07-15 14:24:49.632192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:03.711 [2024-07-15 14:24:49.632298] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:32:03.711 [2024-07-15 14:24:49.632437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:03.711 [2024-07-15 14:24:49.634304] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:03.711 [2024-07-15 14:24:49.634422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:03.711 BaseBdev2 00:32:03.711 14:24:49 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:32:04.278 spare_malloc 00:32:04.278 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:04.278 spare_delay 00:32:04.537 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:04.796 [2024-07-15 14:24:50.554470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:04.796 [2024-07-15 14:24:50.554983] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:04.796 [2024-07-15 14:24:50.555106] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:04.796 [2024-07-15 14:24:50.555195] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:04.796 [2024-07-15 14:24:50.556995] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:04.796 [2024-07-15 14:24:50.557145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:04.796 spare 00:32:04.796 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:32:04.796 [2024-07-15 14:24:50.786561] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:04.796 [2024-07-15 14:24:50.788116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:04.796 [2024-07-15 14:24:50.788331] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:32:04.796 [2024-07-15 14:24:50.788347] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:04.796 [2024-07-15 14:24:50.788484] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:32:04.796 [2024-07-15 14:24:50.788747] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:32:04.796 [2024-07-15 14:24:50.788785] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:32:04.796 [2024-07-15 14:24:50.788897] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.055 14:24:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.312 14:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:05.312 "name": "raid_bdev1", 00:32:05.312 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:05.312 "strip_size_kb": 0, 00:32:05.312 "state": "online", 00:32:05.312 "raid_level": "raid1", 00:32:05.312 "superblock": true, 00:32:05.312 "num_base_bdevs": 2, 00:32:05.312 "num_base_bdevs_discovered": 2, 00:32:05.312 "num_base_bdevs_operational": 2, 00:32:05.312 "base_bdevs_list": [ 00:32:05.312 { 00:32:05.312 "name": "BaseBdev1", 00:32:05.312 "uuid": "0254019d-f4df-5cac-8c46-fc7b5e380dc3", 00:32:05.312 "is_configured": true, 00:32:05.312 "data_offset": 256, 00:32:05.312 "data_size": 7936 00:32:05.312 }, 00:32:05.312 { 00:32:05.312 "name": "BaseBdev2", 00:32:05.312 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:05.312 "is_configured": true, 00:32:05.312 "data_offset": 256, 00:32:05.312 "data_size": 7936 00:32:05.312 } 00:32:05.312 ] 00:32:05.312 }' 00:32:05.312 14:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:05.312 14:24:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:05.876 14:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:05.876 14:24:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:32:06.441 [2024-07-15 14:24:52.146894] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:06.441 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:32:06.441 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.441 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 
00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:06.698 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:06.954 [2024-07-15 14:24:52.794917] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:32:06.954 /dev/nbd0 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:06.954 1+0 records in 00:32:06.954 1+0 records out 00:32:06.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468277 s, 8.7 MB/s 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:32:06.954 14:24:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:32:07.883 7936+0 records in 00:32:07.883 7936+0 records out 00:32:07.883 32505856 bytes (33 MB, 31 MiB) copied, 0.767461 s, 42.4 MB/s 00:32:07.883 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks 
/var/tmp/spdk-raid.sock /dev/nbd0 00:32:07.883 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:07.883 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:07.883 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:07.883 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:07.883 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.883 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:08.140 [2024-07-15 14:24:53.954377] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:08.140 14:24:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:32:08.397 [2024-07-15 14:24:54.258218] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:08.397 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:08.397 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:08.398 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:08.656 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:08.656 
"name": "raid_bdev1", 00:32:08.656 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:08.656 "strip_size_kb": 0, 00:32:08.656 "state": "online", 00:32:08.656 "raid_level": "raid1", 00:32:08.656 "superblock": true, 00:32:08.656 "num_base_bdevs": 2, 00:32:08.656 "num_base_bdevs_discovered": 1, 00:32:08.656 "num_base_bdevs_operational": 1, 00:32:08.656 "base_bdevs_list": [ 00:32:08.656 { 00:32:08.656 "name": null, 00:32:08.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:08.656 "is_configured": false, 00:32:08.656 "data_offset": 256, 00:32:08.656 "data_size": 7936 00:32:08.656 }, 00:32:08.656 { 00:32:08.656 "name": "BaseBdev2", 00:32:08.656 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:08.656 "is_configured": true, 00:32:08.656 "data_offset": 256, 00:32:08.656 "data_size": 7936 00:32:08.656 } 00:32:08.656 ] 00:32:08.656 }' 00:32:08.656 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:08.656 14:24:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:09.589 14:24:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:09.589 [2024-07-15 14:24:55.590251] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:09.846 [2024-07-15 14:24:55.609201] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:32:09.846 [2024-07-15 14:24:55.611149] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:09.846 14:24:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:32:10.779 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:10.779 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:10.779 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:10.779 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:10.779 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:10.779 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.779 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.038 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:11.038 "name": "raid_bdev1", 00:32:11.038 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:11.038 "strip_size_kb": 0, 00:32:11.038 "state": "online", 00:32:11.038 "raid_level": "raid1", 00:32:11.038 "superblock": true, 00:32:11.038 "num_base_bdevs": 2, 00:32:11.038 "num_base_bdevs_discovered": 2, 00:32:11.038 "num_base_bdevs_operational": 2, 00:32:11.038 "process": { 00:32:11.038 "type": "rebuild", 00:32:11.038 "target": "spare", 00:32:11.038 "progress": { 00:32:11.038 "blocks": 3072, 00:32:11.038 "percent": 38 00:32:11.038 } 00:32:11.038 }, 00:32:11.038 "base_bdevs_list": [ 00:32:11.038 { 00:32:11.038 "name": "spare", 00:32:11.038 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:11.038 "is_configured": true, 00:32:11.038 "data_offset": 256, 00:32:11.038 "data_size": 7936 00:32:11.038 }, 00:32:11.038 { 
00:32:11.038 "name": "BaseBdev2", 00:32:11.038 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:11.038 "is_configured": true, 00:32:11.038 "data_offset": 256, 00:32:11.038 "data_size": 7936 00:32:11.038 } 00:32:11.038 ] 00:32:11.038 }' 00:32:11.038 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:11.038 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:11.038 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:11.038 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:11.038 14:24:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:11.297 [2024-07-15 14:24:57.160554] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:11.297 [2024-07-15 14:24:57.221138] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:11.297 [2024-07-15 14:24:57.221344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:11.297 [2024-07-15 14:24:57.221403] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:11.297 [2024-07-15 14:24:57.221534] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:11.297 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.556 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:11.556 "name": "raid_bdev1", 00:32:11.556 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:11.556 "strip_size_kb": 0, 00:32:11.556 "state": "online", 00:32:11.556 "raid_level": "raid1", 00:32:11.556 "superblock": true, 00:32:11.556 "num_base_bdevs": 2, 00:32:11.556 "num_base_bdevs_discovered": 1, 00:32:11.556 "num_base_bdevs_operational": 1, 00:32:11.556 "base_bdevs_list": [ 00:32:11.556 { 00:32:11.556 "name": null, 00:32:11.556 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:32:11.556 "is_configured": false, 00:32:11.556 "data_offset": 256, 00:32:11.556 "data_size": 7936 00:32:11.556 }, 00:32:11.556 { 00:32:11.556 "name": "BaseBdev2", 00:32:11.556 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:11.556 "is_configured": true, 00:32:11.556 "data_offset": 256, 00:32:11.556 "data_size": 7936 00:32:11.556 } 00:32:11.556 ] 00:32:11.556 }' 00:32:11.556 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:11.556 14:24:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:12.169 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:12.169 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:12.169 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:12.169 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:12.169 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:12.169 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.169 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.428 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:12.428 "name": "raid_bdev1", 00:32:12.428 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:12.428 "strip_size_kb": 0, 00:32:12.428 "state": "online", 00:32:12.428 "raid_level": "raid1", 00:32:12.428 "superblock": true, 00:32:12.428 "num_base_bdevs": 2, 00:32:12.428 "num_base_bdevs_discovered": 1, 00:32:12.428 "num_base_bdevs_operational": 1, 00:32:12.428 "base_bdevs_list": [ 00:32:12.428 { 00:32:12.428 "name": null, 00:32:12.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.428 "is_configured": false, 00:32:12.428 "data_offset": 256, 00:32:12.428 "data_size": 7936 00:32:12.428 }, 00:32:12.428 { 00:32:12.428 "name": "BaseBdev2", 00:32:12.428 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:12.428 "is_configured": true, 00:32:12.428 "data_offset": 256, 00:32:12.428 "data_size": 7936 00:32:12.428 } 00:32:12.428 ] 00:32:12.428 }' 00:32:12.428 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:12.687 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:12.687 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:12.687 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:12.687 14:24:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:12.945 [2024-07-15 14:24:58.695788] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:12.945 [2024-07-15 14:24:58.710085] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:32:12.945 [2024-07-15 14:24:58.711620] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:12.945 14:24:58 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:32:13.881 14:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:13.881 14:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:13.881 14:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:13.881 14:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:13.881 14:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:13.881 14:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:13.882 14:24:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:14.140 "name": "raid_bdev1", 00:32:14.140 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:14.140 "strip_size_kb": 0, 00:32:14.140 "state": "online", 00:32:14.140 "raid_level": "raid1", 00:32:14.140 "superblock": true, 00:32:14.140 "num_base_bdevs": 2, 00:32:14.140 "num_base_bdevs_discovered": 2, 00:32:14.140 "num_base_bdevs_operational": 2, 00:32:14.140 "process": { 00:32:14.140 "type": "rebuild", 00:32:14.140 "target": "spare", 00:32:14.140 "progress": { 00:32:14.140 "blocks": 3072, 00:32:14.140 "percent": 38 00:32:14.140 } 00:32:14.140 }, 00:32:14.140 "base_bdevs_list": [ 00:32:14.140 { 00:32:14.140 "name": "spare", 00:32:14.140 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:14.140 "is_configured": true, 00:32:14.140 "data_offset": 256, 00:32:14.140 "data_size": 7936 00:32:14.140 }, 00:32:14.140 { 00:32:14.140 "name": "BaseBdev2", 00:32:14.140 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:14.140 "is_configured": true, 00:32:14.140 "data_offset": 256, 00:32:14.140 "data_size": 7936 00:32:14.140 } 00:32:14.140 ] 00:32:14.140 }' 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:32:14.140 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1157 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:14.140 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:14.141 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:14.141 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:14.141 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.141 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.399 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:14.399 "name": "raid_bdev1", 00:32:14.399 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:14.399 "strip_size_kb": 0, 00:32:14.399 "state": "online", 00:32:14.399 "raid_level": "raid1", 00:32:14.399 "superblock": true, 00:32:14.399 "num_base_bdevs": 2, 00:32:14.399 "num_base_bdevs_discovered": 2, 00:32:14.399 "num_base_bdevs_operational": 2, 00:32:14.399 "process": { 00:32:14.399 "type": "rebuild", 00:32:14.399 "target": "spare", 00:32:14.399 "progress": { 00:32:14.399 "blocks": 4096, 00:32:14.399 "percent": 51 00:32:14.399 } 00:32:14.399 }, 00:32:14.399 "base_bdevs_list": [ 00:32:14.399 { 00:32:14.399 "name": "spare", 00:32:14.399 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:14.399 "is_configured": true, 00:32:14.399 "data_offset": 256, 00:32:14.399 "data_size": 7936 00:32:14.399 }, 00:32:14.399 { 00:32:14.399 "name": "BaseBdev2", 00:32:14.399 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:14.399 "is_configured": true, 00:32:14.399 "data_offset": 256, 00:32:14.399 "data_size": 7936 00:32:14.399 } 00:32:14.399 ] 00:32:14.399 }' 00:32:14.399 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:14.657 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:14.657 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:14.657 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:14.657 14:25:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:15.589 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.847 
14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:15.847 "name": "raid_bdev1", 00:32:15.847 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:15.847 "strip_size_kb": 0, 00:32:15.847 "state": "online", 00:32:15.847 "raid_level": "raid1", 00:32:15.847 "superblock": true, 00:32:15.847 "num_base_bdevs": 2, 00:32:15.847 "num_base_bdevs_discovered": 2, 00:32:15.847 "num_base_bdevs_operational": 2, 00:32:15.847 "process": { 00:32:15.847 "type": "rebuild", 00:32:15.847 "target": "spare", 00:32:15.847 "progress": { 00:32:15.847 "blocks": 7680, 00:32:15.847 "percent": 96 00:32:15.847 } 00:32:15.847 }, 00:32:15.847 "base_bdevs_list": [ 00:32:15.847 { 00:32:15.847 "name": "spare", 00:32:15.847 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:15.847 "is_configured": true, 00:32:15.847 "data_offset": 256, 00:32:15.847 "data_size": 7936 00:32:15.847 }, 00:32:15.847 { 00:32:15.847 "name": "BaseBdev2", 00:32:15.847 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:15.847 "is_configured": true, 00:32:15.847 "data_offset": 256, 00:32:15.847 "data_size": 7936 00:32:15.847 } 00:32:15.847 ] 00:32:15.847 }' 00:32:15.847 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:15.847 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:15.847 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:15.847 [2024-07-15 14:25:01.830083] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:15.847 [2024-07-15 14:25:01.830293] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:15.847 [2024-07-15 14:25:01.830935] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:16.106 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:16.106 14:25:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.067 14:25:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:17.330 "name": "raid_bdev1", 00:32:17.330 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:17.330 "strip_size_kb": 0, 00:32:17.330 "state": "online", 00:32:17.330 "raid_level": "raid1", 00:32:17.330 "superblock": true, 00:32:17.330 "num_base_bdevs": 2, 00:32:17.330 "num_base_bdevs_discovered": 2, 00:32:17.330 
"num_base_bdevs_operational": 2, 00:32:17.330 "base_bdevs_list": [ 00:32:17.330 { 00:32:17.330 "name": "spare", 00:32:17.330 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:17.330 "is_configured": true, 00:32:17.330 "data_offset": 256, 00:32:17.330 "data_size": 7936 00:32:17.330 }, 00:32:17.330 { 00:32:17.330 "name": "BaseBdev2", 00:32:17.330 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:17.330 "is_configured": true, 00:32:17.330 "data_offset": 256, 00:32:17.330 "data_size": 7936 00:32:17.330 } 00:32:17.330 ] 00:32:17.330 }' 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.330 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:17.589 "name": "raid_bdev1", 00:32:17.589 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:17.589 "strip_size_kb": 0, 00:32:17.589 "state": "online", 00:32:17.589 "raid_level": "raid1", 00:32:17.589 "superblock": true, 00:32:17.589 "num_base_bdevs": 2, 00:32:17.589 "num_base_bdevs_discovered": 2, 00:32:17.589 "num_base_bdevs_operational": 2, 00:32:17.589 "base_bdevs_list": [ 00:32:17.589 { 00:32:17.589 "name": "spare", 00:32:17.589 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:17.589 "is_configured": true, 00:32:17.589 "data_offset": 256, 00:32:17.589 "data_size": 7936 00:32:17.589 }, 00:32:17.589 { 00:32:17.589 "name": "BaseBdev2", 00:32:17.589 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:17.589 "is_configured": true, 00:32:17.589 "data_offset": 256, 00:32:17.589 "data_size": 7936 00:32:17.589 } 00:32:17.589 ] 00:32:17.589 }' 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 
00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:17.589 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.155 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:18.155 "name": "raid_bdev1", 00:32:18.155 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:18.155 "strip_size_kb": 0, 00:32:18.155 "state": "online", 00:32:18.155 "raid_level": "raid1", 00:32:18.155 "superblock": true, 00:32:18.155 "num_base_bdevs": 2, 00:32:18.155 "num_base_bdevs_discovered": 2, 00:32:18.155 "num_base_bdevs_operational": 2, 00:32:18.155 "base_bdevs_list": [ 00:32:18.155 { 00:32:18.155 "name": "spare", 00:32:18.155 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:18.155 "is_configured": true, 00:32:18.155 "data_offset": 256, 00:32:18.155 "data_size": 7936 00:32:18.155 }, 00:32:18.155 { 00:32:18.155 "name": "BaseBdev2", 00:32:18.155 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:18.155 "is_configured": true, 00:32:18.155 "data_offset": 256, 00:32:18.155 "data_size": 7936 00:32:18.155 } 00:32:18.155 ] 00:32:18.155 }' 00:32:18.155 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:18.155 14:25:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:18.723 14:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:18.982 [2024-07-15 14:25:04.760180] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:18.982 [2024-07-15 14:25:04.760417] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:18.982 [2024-07-15 14:25:04.760597] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:18.982 [2024-07-15 14:25:04.760777] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:18.982 [2024-07-15 14:25:04.760904] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:32:18.982 14:25:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.982 14:25:04 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:32:19.240 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:19.240 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:19.240 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:19.241 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:19.499 /dev/nbd0 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:19.499 1+0 records in 00:32:19.499 1+0 records out 00:32:19.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501627 s, 8.2 MB/s 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:19.499 
14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:19.499 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:19.758 /dev/nbd1 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # local i 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # break 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:19.758 1+0 records in 00:32:19.758 1+0 records out 00:32:19.758 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602161 s, 6.8 MB/s 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # size=4096 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # return 0 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:19.758 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:20.017 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:20.017 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:20.017 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:20.017 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:20.017 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:32:20.017 14:25:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:20.017 14:25:05 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:20.276 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:20.534 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:20.832 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:21.137 [2024-07-15 14:25:06.951785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:21.137 [2024-07-15 14:25:06.952108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:21.137 [2024-07-15 14:25:06.952219] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:32:21.138 [2024-07-15 14:25:06.952358] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:21.138 [2024-07-15 14:25:06.954192] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:21.138 [2024-07-15 14:25:06.954359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:21.138 [2024-07-15 14:25:06.954600] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:21.138 [2024-07-15 14:25:06.954810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
spare is claimed 00:32:21.138 [2024-07-15 14:25:06.955037] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:21.138 spare 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.138 14:25:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:21.138 [2024-07-15 14:25:07.055252] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:32:21.138 [2024-07-15 14:25:07.055497] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:21.138 [2024-07-15 14:25:07.055706] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:32:21.138 [2024-07-15 14:25:07.056204] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:32:21.138 [2024-07-15 14:25:07.056340] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:32:21.138 [2024-07-15 14:25:07.056589] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:21.396 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.396 "name": "raid_bdev1", 00:32:21.396 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:21.396 "strip_size_kb": 0, 00:32:21.396 "state": "online", 00:32:21.396 "raid_level": "raid1", 00:32:21.396 "superblock": true, 00:32:21.396 "num_base_bdevs": 2, 00:32:21.396 "num_base_bdevs_discovered": 2, 00:32:21.396 "num_base_bdevs_operational": 2, 00:32:21.396 "base_bdevs_list": [ 00:32:21.396 { 00:32:21.396 "name": "spare", 00:32:21.396 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:21.396 "is_configured": true, 00:32:21.396 "data_offset": 256, 00:32:21.396 "data_size": 7936 00:32:21.396 }, 00:32:21.396 { 00:32:21.396 "name": "BaseBdev2", 00:32:21.396 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:21.396 "is_configured": true, 00:32:21.396 "data_offset": 256, 00:32:21.396 "data_size": 7936 00:32:21.396 } 00:32:21.396 ] 00:32:21.396 }' 00:32:21.396 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.396 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
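[editor's note] The verify_raid_bdev_state call traced above (invoked as verify_raid_bdev_state raid_bdev1 online raid1 0 2) reduces to one RPC query plus a few jq field checks on the returned JSON. Below is a minimal stand-alone sketch of that check, reconstructed from the traced commands; the exact set of assertions inside the real helper in bdev_raid.sh is not fully visible here, so the comparisons are an approximation.

    # Approximate re-creation of the traced state check. Field names come from
    # the bdev_raid_get_bdevs output shown above; the comparisons are assumed.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ "$(jq -r '.state' <<< "$info")" == "online" ]]
    [[ "$(jq -r '.raid_level' <<< "$info")" == "raid1" ]]
    (( $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 2 ))
    (( $(jq -r '.num_base_bdevs_operational' <<< "$info") == 2 ))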
00:32:22.333 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:22.333 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:22.333 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:22.333 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:22.333 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:22.333 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.333 14:25:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.333 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:22.333 "name": "raid_bdev1", 00:32:22.333 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:22.333 "strip_size_kb": 0, 00:32:22.333 "state": "online", 00:32:22.333 "raid_level": "raid1", 00:32:22.333 "superblock": true, 00:32:22.333 "num_base_bdevs": 2, 00:32:22.333 "num_base_bdevs_discovered": 2, 00:32:22.333 "num_base_bdevs_operational": 2, 00:32:22.333 "base_bdevs_list": [ 00:32:22.333 { 00:32:22.333 "name": "spare", 00:32:22.333 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:22.333 "is_configured": true, 00:32:22.333 "data_offset": 256, 00:32:22.333 "data_size": 7936 00:32:22.333 }, 00:32:22.333 { 00:32:22.333 "name": "BaseBdev2", 00:32:22.333 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:22.333 "is_configured": true, 00:32:22.333 "data_offset": 256, 00:32:22.333 "data_size": 7936 00:32:22.333 } 00:32:22.333 ] 00:32:22.333 }' 00:32:22.333 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:22.333 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:22.333 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:22.591 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:22.591 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.591 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:22.849 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:22.849 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:23.108 [2024-07-15 14:25:08.872886] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:23.108 14:25:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:23.366 14:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:23.366 "name": "raid_bdev1", 00:32:23.366 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:23.366 "strip_size_kb": 0, 00:32:23.366 "state": "online", 00:32:23.366 "raid_level": "raid1", 00:32:23.366 "superblock": true, 00:32:23.366 "num_base_bdevs": 2, 00:32:23.366 "num_base_bdevs_discovered": 1, 00:32:23.366 "num_base_bdevs_operational": 1, 00:32:23.366 "base_bdevs_list": [ 00:32:23.366 { 00:32:23.366 "name": null, 00:32:23.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.366 "is_configured": false, 00:32:23.366 "data_offset": 256, 00:32:23.366 "data_size": 7936 00:32:23.366 }, 00:32:23.366 { 00:32:23.366 "name": "BaseBdev2", 00:32:23.366 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:23.366 "is_configured": true, 00:32:23.366 "data_offset": 256, 00:32:23.366 "data_size": 7936 00:32:23.366 } 00:32:23.366 ] 00:32:23.366 }' 00:32:23.366 14:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:23.366 14:25:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:24.300 14:25:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:24.300 [2024-07-15 14:25:10.229298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:24.300 [2024-07-15 14:25:10.229866] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:24.300 [2024-07-15 14:25:10.230000] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:24.300 [2024-07-15 14:25:10.230570] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:24.300 [2024-07-15 14:25:10.245456] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:32:24.300 [2024-07-15 14:25:10.260444] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:24.300 14:25:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.743 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:25.743 "name": "raid_bdev1", 00:32:25.743 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:25.743 "strip_size_kb": 0, 00:32:25.743 "state": "online", 00:32:25.743 "raid_level": "raid1", 00:32:25.743 "superblock": true, 00:32:25.743 "num_base_bdevs": 2, 00:32:25.743 "num_base_bdevs_discovered": 2, 00:32:25.743 "num_base_bdevs_operational": 2, 00:32:25.743 "process": { 00:32:25.743 "type": "rebuild", 00:32:25.743 "target": "spare", 00:32:25.743 "progress": { 00:32:25.743 "blocks": 3072, 00:32:25.743 "percent": 38 00:32:25.743 } 00:32:25.743 }, 00:32:25.743 "base_bdevs_list": [ 00:32:25.743 { 00:32:25.743 "name": "spare", 00:32:25.743 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:25.743 "is_configured": true, 00:32:25.743 "data_offset": 256, 00:32:25.743 "data_size": 7936 00:32:25.743 }, 00:32:25.743 { 00:32:25.743 "name": "BaseBdev2", 00:32:25.743 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:25.743 "is_configured": true, 00:32:25.743 "data_offset": 256, 00:32:25.743 "data_size": 7936 00:32:25.744 } 00:32:25.744 ] 00:32:25.744 }' 00:32:25.744 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:25.744 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:25.744 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:25.744 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:25.744 14:25:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:26.002 [2024-07-15 14:25:11.979512] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:26.259 [2024-07-15 14:25:12.071996] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:26.259 [2024-07-15 14:25:12.072678] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:32:26.259 [2024-07-15 14:25:12.072878] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:26.259 [2024-07-15 14:25:12.072931] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.259 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.517 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:26.517 "name": "raid_bdev1", 00:32:26.517 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:26.517 "strip_size_kb": 0, 00:32:26.517 "state": "online", 00:32:26.517 "raid_level": "raid1", 00:32:26.517 "superblock": true, 00:32:26.517 "num_base_bdevs": 2, 00:32:26.517 "num_base_bdevs_discovered": 1, 00:32:26.517 "num_base_bdevs_operational": 1, 00:32:26.517 "base_bdevs_list": [ 00:32:26.517 { 00:32:26.517 "name": null, 00:32:26.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.517 "is_configured": false, 00:32:26.517 "data_offset": 256, 00:32:26.517 "data_size": 7936 00:32:26.517 }, 00:32:26.517 { 00:32:26.517 "name": "BaseBdev2", 00:32:26.517 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:26.517 "is_configured": true, 00:32:26.517 "data_offset": 256, 00:32:26.517 "data_size": 7936 00:32:26.517 } 00:32:26.517 ] 00:32:26.517 }' 00:32:26.517 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:26.517 14:25:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:27.084 14:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:27.342 [2024-07-15 14:25:13.334535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:27.342 [2024-07-15 14:25:13.335178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:27.342 [2024-07-15 14:25:13.335287] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:32:27.342 [2024-07-15 14:25:13.335376] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:27.342 [2024-07-15 14:25:13.335885] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:27.342 [2024-07-15 14:25:13.335987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:27.342 [2024-07-15 14:25:13.336140] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:27.342 [2024-07-15 14:25:13.336159] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:27.342 [2024-07-15 14:25:13.336169] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:27.342 [2024-07-15 14:25:13.336269] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:27.601 [2024-07-15 14:25:13.350285] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c22a0 00:32:27.601 spare 00:32:27.601 [2024-07-15 14:25:13.351879] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:27.601 14:25:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:28.536 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:28.536 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:28.536 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:28.536 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:28.536 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:28.536 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.536 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.793 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:28.793 "name": "raid_bdev1", 00:32:28.793 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:28.793 "strip_size_kb": 0, 00:32:28.793 "state": "online", 00:32:28.793 "raid_level": "raid1", 00:32:28.793 "superblock": true, 00:32:28.793 "num_base_bdevs": 2, 00:32:28.793 "num_base_bdevs_discovered": 2, 00:32:28.793 "num_base_bdevs_operational": 2, 00:32:28.793 "process": { 00:32:28.793 "type": "rebuild", 00:32:28.793 "target": "spare", 00:32:28.794 "progress": { 00:32:28.794 "blocks": 3072, 00:32:28.794 "percent": 38 00:32:28.794 } 00:32:28.794 }, 00:32:28.794 "base_bdevs_list": [ 00:32:28.794 { 00:32:28.794 "name": "spare", 00:32:28.794 "uuid": "7cef08dc-eb11-5216-84ec-823eccc154d6", 00:32:28.794 "is_configured": true, 00:32:28.794 "data_offset": 256, 00:32:28.794 "data_size": 7936 00:32:28.794 }, 00:32:28.794 { 00:32:28.794 "name": "BaseBdev2", 00:32:28.794 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:28.794 "is_configured": true, 00:32:28.794 "data_offset": 256, 00:32:28.794 "data_size": 7936 00:32:28.794 } 00:32:28.794 ] 00:32:28.794 }' 00:32:28.794 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:28.794 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
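[editor's note] The trace at this point shows the rebuild path: the spare passthru bdev was deleted and then re-created on top of a base that still carries an older raid superblock (sequence 4 versus the array's 5), so the raid module re-adds it on examine and starts a rebuild, and the test waits and checks the process fields. A condensed sketch of that sequence follows, using the RPC commands actually issued above; the wait is simplified to the single "sleep 1" the script itself uses.

    # Re-add a previously removed member and confirm a rebuild targets it.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete spare                    # raid1 keeps running degraded
    $rpc bdev_passthru_create -b spare_delay -p spare  # stale superblock (seq 4 < 5) triggers re-add on examine
    sleep 1
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [[ "$(jq -r '.process.type // "none"' <<< "$info")" == "rebuild" ]]
    [[ "$(jq -r '.process.target // "none"' <<< "$info")" == "spare" ]]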
00:32:28.794 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:28.794 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:28.794 14:25:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:29.051 [2024-07-15 14:25:15.002368] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:29.310 [2024-07-15 14:25:15.061821] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:29.310 [2024-07-15 14:25:15.062419] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:29.310 [2024-07-15 14:25:15.062558] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:29.310 [2024-07-15 14:25:15.062609] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:29.310 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:29.570 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.570 "name": "raid_bdev1", 00:32:29.570 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:29.570 "strip_size_kb": 0, 00:32:29.570 "state": "online", 00:32:29.570 "raid_level": "raid1", 00:32:29.570 "superblock": true, 00:32:29.570 "num_base_bdevs": 2, 00:32:29.570 "num_base_bdevs_discovered": 1, 00:32:29.570 "num_base_bdevs_operational": 1, 00:32:29.570 "base_bdevs_list": [ 00:32:29.570 { 00:32:29.570 "name": null, 00:32:29.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:29.570 "is_configured": false, 00:32:29.570 "data_offset": 256, 00:32:29.570 "data_size": 7936 00:32:29.570 }, 00:32:29.570 { 00:32:29.570 "name": "BaseBdev2", 00:32:29.570 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:29.570 "is_configured": true, 00:32:29.570 "data_offset": 256, 00:32:29.570 "data_size": 7936 00:32:29.570 } 00:32:29.570 ] 00:32:29.570 }' 00:32:29.570 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:32:29.570 14:25:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:30.134 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:30.134 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:30.134 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:30.134 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:30.134 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:30.134 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:30.134 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.698 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:30.698 "name": "raid_bdev1", 00:32:30.698 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:30.698 "strip_size_kb": 0, 00:32:30.698 "state": "online", 00:32:30.698 "raid_level": "raid1", 00:32:30.698 "superblock": true, 00:32:30.698 "num_base_bdevs": 2, 00:32:30.698 "num_base_bdevs_discovered": 1, 00:32:30.698 "num_base_bdevs_operational": 1, 00:32:30.698 "base_bdevs_list": [ 00:32:30.698 { 00:32:30.698 "name": null, 00:32:30.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:30.698 "is_configured": false, 00:32:30.698 "data_offset": 256, 00:32:30.698 "data_size": 7936 00:32:30.698 }, 00:32:30.698 { 00:32:30.698 "name": "BaseBdev2", 00:32:30.698 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:30.698 "is_configured": true, 00:32:30.698 "data_offset": 256, 00:32:30.698 "data_size": 7936 00:32:30.698 } 00:32:30.698 ] 00:32:30.698 }' 00:32:30.698 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:30.698 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:30.698 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:30.698 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:30.698 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:30.955 14:25:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:31.212 [2024-07-15 14:25:17.166995] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:31.212 [2024-07-15 14:25:17.167397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:31.212 [2024-07-15 14:25:17.167500] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:32:31.212 [2024-07-15 14:25:17.167739] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:31.212 [2024-07-15 14:25:17.168334] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:31.212 [2024-07-15 14:25:17.168503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:32:31.212 [2024-07-15 14:25:17.168775] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:31.212 [2024-07-15 14:25:17.168917] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:31.212 [2024-07-15 14:25:17.169043] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:31.212 BaseBdev1 00:32:31.212 14:25:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:32.584 "name": "raid_bdev1", 00:32:32.584 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:32.584 "strip_size_kb": 0, 00:32:32.584 "state": "online", 00:32:32.584 "raid_level": "raid1", 00:32:32.584 "superblock": true, 00:32:32.584 "num_base_bdevs": 2, 00:32:32.584 "num_base_bdevs_discovered": 1, 00:32:32.584 "num_base_bdevs_operational": 1, 00:32:32.584 "base_bdevs_list": [ 00:32:32.584 { 00:32:32.584 "name": null, 00:32:32.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:32.584 "is_configured": false, 00:32:32.584 "data_offset": 256, 00:32:32.584 "data_size": 7936 00:32:32.584 }, 00:32:32.584 { 00:32:32.584 "name": "BaseBdev2", 00:32:32.584 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:32.584 "is_configured": true, 00:32:32.584 "data_offset": 256, 00:32:32.584 "data_size": 7936 00:32:32.584 } 00:32:32.584 ] 00:32:32.584 }' 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:32.584 14:25:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:33.519 "name": "raid_bdev1", 00:32:33.519 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:33.519 "strip_size_kb": 0, 00:32:33.519 "state": "online", 00:32:33.519 "raid_level": "raid1", 00:32:33.519 "superblock": true, 00:32:33.519 "num_base_bdevs": 2, 00:32:33.519 "num_base_bdevs_discovered": 1, 00:32:33.519 "num_base_bdevs_operational": 1, 00:32:33.519 "base_bdevs_list": [ 00:32:33.519 { 00:32:33.519 "name": null, 00:32:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.519 "is_configured": false, 00:32:33.519 "data_offset": 256, 00:32:33.519 "data_size": 7936 00:32:33.519 }, 00:32:33.519 { 00:32:33.519 "name": "BaseBdev2", 00:32:33.519 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:33.519 "is_configured": true, 00:32:33.519 "data_offset": 256, 00:32:33.519 "data_size": 7936 00:32:33.519 } 00:32:33.519 ] 00:32:33.519 }' 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:33.519 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:33.520 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:33.520 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:33.778 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:33.778 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:33.778 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:33.778 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:33.779 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:33.779 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:33.779 [2024-07-15 14:25:19.759087] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:33.779 [2024-07-15 14:25:19.759494] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:33.779 [2024-07-15 14:25:19.759657] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:33.779 request: 00:32:33.779 { 00:32:33.779 "base_bdev": "BaseBdev1", 00:32:33.779 "raid_bdev": "raid_bdev1", 00:32:33.779 "method": "bdev_raid_add_base_bdev", 00:32:33.779 "req_id": 1 00:32:33.779 } 00:32:33.779 Got JSON-RPC error response 00:32:33.779 response: 00:32:33.779 { 00:32:33.779 "code": -22, 00:32:33.779 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:33.779 } 00:32:33.779 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:32:33.779 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:33.779 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:33.779 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:34.037 14:25:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.972 14:25:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.231 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:35.231 "name": "raid_bdev1", 00:32:35.231 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:35.231 "strip_size_kb": 0, 00:32:35.231 "state": "online", 00:32:35.231 "raid_level": "raid1", 00:32:35.231 "superblock": true, 00:32:35.231 "num_base_bdevs": 2, 00:32:35.231 "num_base_bdevs_discovered": 1, 00:32:35.231 "num_base_bdevs_operational": 1, 00:32:35.231 
"base_bdevs_list": [ 00:32:35.231 { 00:32:35.231 "name": null, 00:32:35.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:35.231 "is_configured": false, 00:32:35.231 "data_offset": 256, 00:32:35.231 "data_size": 7936 00:32:35.231 }, 00:32:35.231 { 00:32:35.231 "name": "BaseBdev2", 00:32:35.231 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:35.231 "is_configured": true, 00:32:35.231 "data_offset": 256, 00:32:35.231 "data_size": 7936 00:32:35.231 } 00:32:35.231 ] 00:32:35.231 }' 00:32:35.231 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:35.231 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:35.799 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:35.799 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:35.799 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:35.799 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:35.799 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:35.799 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:35.799 14:25:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:36.057 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:36.057 "name": "raid_bdev1", 00:32:36.057 "uuid": "62849eff-7248-4763-9cb2-0b9c5d4cda4a", 00:32:36.057 "strip_size_kb": 0, 00:32:36.057 "state": "online", 00:32:36.057 "raid_level": "raid1", 00:32:36.057 "superblock": true, 00:32:36.057 "num_base_bdevs": 2, 00:32:36.057 "num_base_bdevs_discovered": 1, 00:32:36.057 "num_base_bdevs_operational": 1, 00:32:36.057 "base_bdevs_list": [ 00:32:36.057 { 00:32:36.057 "name": null, 00:32:36.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:36.057 "is_configured": false, 00:32:36.057 "data_offset": 256, 00:32:36.057 "data_size": 7936 00:32:36.057 }, 00:32:36.057 { 00:32:36.057 "name": "BaseBdev2", 00:32:36.057 "uuid": "3d6cb0e9-6716-5680-b1c9-ab0cf42941c7", 00:32:36.057 "is_configured": true, 00:32:36.057 "data_offset": 256, 00:32:36.057 "data_size": 7936 00:32:36.057 } 00:32:36.057 ] 00:32:36.057 }' 00:32:36.057 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 217055 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 217055 ']' 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 217055 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 217055 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 217055' 00:32:36.315 killing process with pid 217055 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@967 -- # kill 217055 00:32:36.315 Received shutdown signal, test time was about 60.000000 seconds 00:32:36.315 00:32:36.315 Latency(us) 00:32:36.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.315 =================================================================================================================== 00:32:36.315 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:36.315 14:25:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # wait 217055 00:32:36.315 [2024-07-15 14:25:22.148442] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:36.315 [2024-07-15 14:25:22.148545] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:36.315 [2024-07-15 14:25:22.148585] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:36.315 [2024-07-15 14:25:22.148597] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:32:36.574 [2024-07-15 14:25:22.397673] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:37.949 14:25:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:32:37.949 00:32:37.949 real 0m36.268s 00:32:37.949 user 0m57.738s 00:32:37.949 sys 0m4.252s 00:32:37.949 14:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:37.949 14:25:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:32:37.949 ************************************ 00:32:37.949 END TEST raid_rebuild_test_sb_4k 00:32:37.949 ************************************ 00:32:37.949 14:25:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:37.949 14:25:23 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:32:37.949 14:25:23 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:32:37.949 14:25:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:32:37.949 14:25:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.949 14:25:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:37.949 ************************************ 00:32:37.949 START TEST raid_state_function_test_sb_md_separate 00:32:37.949 ************************************ 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=217947 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 217947' 00:32:37.949 Process raid pid: 217947 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 217947 /var/tmp/spdk-raid.sock 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 217947 ']' 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:37.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:37.949 14:25:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:37.949 [2024-07-15 14:25:23.639320] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:32:37.949 [2024-07-15 14:25:23.639674] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.949 [2024-07-15 14:25:23.802294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.207 [2024-07-15 14:25:24.016157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.466 [2024-07-15 14:25:24.215254] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:38.725 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:38.725 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:32:38.725 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:38.985 [2024-07-15 14:25:24.861245] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:38.985 [2024-07-15 14:25:24.861620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:38.985 [2024-07-15 14:25:24.861765] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:38.985 [2024-07-15 14:25:24.861913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.985 14:25:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.985 14:25:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:39.245 14:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:39.245 "name": "Existed_Raid", 00:32:39.245 "uuid": "e88b031c-a8a3-4412-9c38-ef6cec24a05e", 00:32:39.245 "strip_size_kb": 0, 00:32:39.245 "state": "configuring", 00:32:39.245 "raid_level": "raid1", 00:32:39.245 "superblock": true, 00:32:39.245 "num_base_bdevs": 2, 00:32:39.245 "num_base_bdevs_discovered": 0, 00:32:39.245 "num_base_bdevs_operational": 2, 00:32:39.245 "base_bdevs_list": [ 00:32:39.245 { 00:32:39.245 "name": "BaseBdev1", 00:32:39.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.245 "is_configured": false, 00:32:39.245 "data_offset": 0, 00:32:39.245 "data_size": 0 00:32:39.245 }, 00:32:39.245 { 00:32:39.245 "name": "BaseBdev2", 00:32:39.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:39.245 "is_configured": false, 00:32:39.245 "data_offset": 0, 00:32:39.245 "data_size": 0 00:32:39.245 } 00:32:39.245 ] 00:32:39.245 }' 00:32:39.245 14:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:39.245 14:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:39.812 14:25:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:40.070 [2024-07-15 14:25:26.025314] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:40.070 [2024-07-15 14:25:26.025651] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:40.070 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:40.328 [2024-07-15 14:25:26.325379] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:40.328 [2024-07-15 14:25:26.325969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:40.328 [2024-07-15 14:25:26.326105] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:40.328 [2024-07-15 14:25:26.326251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:40.587 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:32:40.846 [2024-07-15 14:25:26.617974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:40.846 BaseBdev1 00:32:40.846 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:40.846 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:32:40.846 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 
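[editor's note] For this md_separate test case every base device is a malloc bdev created with 32 bytes of separate (non-interleaved) per-block metadata, which is what the "-m 32" in base_malloc_params and the md_size/md_interleave fields in the BaseBdev1 JSON below reflect. A condensed sketch of the configuration flow being traced here, with RPC arguments taken from the log; the order mirrors the test, which creates the raid first so it stays in "configuring" until all bases exist.

    # Create the raid1 with a superblock before its members exist, then add one member.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc bdev_malloc_create 32 4096 -m 32 -b BaseBdev1   # 32 MB, 4096-byte blocks, 32 B separate metadata
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # Expected output: configuring (only 1 of 2 base bdevs discovered so far)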
00:32:40.846 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:32:40.846 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:40.846 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:40.846 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:41.104 14:25:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:41.363 [ 00:32:41.363 { 00:32:41.363 "name": "BaseBdev1", 00:32:41.363 "aliases": [ 00:32:41.363 "e142693a-4e52-4887-9699-b9a279ab088d" 00:32:41.363 ], 00:32:41.363 "product_name": "Malloc disk", 00:32:41.363 "block_size": 4096, 00:32:41.363 "num_blocks": 8192, 00:32:41.363 "uuid": "e142693a-4e52-4887-9699-b9a279ab088d", 00:32:41.363 "md_size": 32, 00:32:41.363 "md_interleave": false, 00:32:41.363 "dif_type": 0, 00:32:41.363 "assigned_rate_limits": { 00:32:41.363 "rw_ios_per_sec": 0, 00:32:41.363 "rw_mbytes_per_sec": 0, 00:32:41.363 "r_mbytes_per_sec": 0, 00:32:41.363 "w_mbytes_per_sec": 0 00:32:41.363 }, 00:32:41.363 "claimed": true, 00:32:41.363 "claim_type": "exclusive_write", 00:32:41.363 "zoned": false, 00:32:41.363 "supported_io_types": { 00:32:41.363 "read": true, 00:32:41.363 "write": true, 00:32:41.363 "unmap": true, 00:32:41.363 "flush": true, 00:32:41.363 "reset": true, 00:32:41.363 "nvme_admin": false, 00:32:41.363 "nvme_io": false, 00:32:41.363 "nvme_io_md": false, 00:32:41.363 "write_zeroes": true, 00:32:41.363 "zcopy": true, 00:32:41.363 "get_zone_info": false, 00:32:41.363 "zone_management": false, 00:32:41.363 "zone_append": false, 00:32:41.363 "compare": false, 00:32:41.363 "compare_and_write": false, 00:32:41.363 "abort": true, 00:32:41.363 "seek_hole": false, 00:32:41.363 "seek_data": false, 00:32:41.363 "copy": true, 00:32:41.363 "nvme_iov_md": false 00:32:41.363 }, 00:32:41.363 "memory_domains": [ 00:32:41.363 { 00:32:41.363 "dma_device_id": "system", 00:32:41.363 "dma_device_type": 1 00:32:41.363 }, 00:32:41.363 { 00:32:41.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:41.363 "dma_device_type": 2 00:32:41.363 } 00:32:41.363 ], 00:32:41.363 "driver_specific": {} 00:32:41.363 } 00:32:41.363 ] 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.363 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.621 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:41.621 "name": "Existed_Raid", 00:32:41.621 "uuid": "a159fc6f-3bd9-49d1-8dc4-d5867244126e", 00:32:41.621 "strip_size_kb": 0, 00:32:41.621 "state": "configuring", 00:32:41.621 "raid_level": "raid1", 00:32:41.621 "superblock": true, 00:32:41.621 "num_base_bdevs": 2, 00:32:41.621 "num_base_bdevs_discovered": 1, 00:32:41.621 "num_base_bdevs_operational": 2, 00:32:41.621 "base_bdevs_list": [ 00:32:41.621 { 00:32:41.621 "name": "BaseBdev1", 00:32:41.621 "uuid": "e142693a-4e52-4887-9699-b9a279ab088d", 00:32:41.621 "is_configured": true, 00:32:41.621 "data_offset": 256, 00:32:41.621 "data_size": 7936 00:32:41.621 }, 00:32:41.621 { 00:32:41.621 "name": "BaseBdev2", 00:32:41.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.621 "is_configured": false, 00:32:41.621 "data_offset": 0, 00:32:41.621 "data_size": 0 00:32:41.621 } 00:32:41.621 ] 00:32:41.621 }' 00:32:41.621 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:41.621 14:25:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:42.188 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:42.446 [2024-07-15 14:25:28.430249] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:42.446 [2024-07-15 14:25:28.430507] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:32:42.446 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:32:43.010 [2024-07-15 14:25:28.718327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:43.010 [2024-07-15 14:25:28.720026] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:43.010 [2024-07-15 14:25:28.720508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:32:43.010 14:25:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.010 14:25:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.269 14:25:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:43.269 "name": "Existed_Raid", 00:32:43.269 "uuid": "a36e7c1a-2b16-48af-a1c4-2d01f6b117ea", 00:32:43.269 "strip_size_kb": 0, 00:32:43.269 "state": "configuring", 00:32:43.269 "raid_level": "raid1", 00:32:43.269 "superblock": true, 00:32:43.269 "num_base_bdevs": 2, 00:32:43.269 "num_base_bdevs_discovered": 1, 00:32:43.269 "num_base_bdevs_operational": 2, 00:32:43.269 "base_bdevs_list": [ 00:32:43.269 { 00:32:43.269 "name": "BaseBdev1", 00:32:43.269 "uuid": "e142693a-4e52-4887-9699-b9a279ab088d", 00:32:43.269 "is_configured": true, 00:32:43.269 "data_offset": 256, 00:32:43.269 "data_size": 7936 00:32:43.269 }, 00:32:43.269 { 00:32:43.269 "name": "BaseBdev2", 00:32:43.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.269 "is_configured": false, 00:32:43.269 "data_offset": 0, 00:32:43.269 "data_size": 0 00:32:43.269 } 00:32:43.269 ] 00:32:43.269 }' 00:32:43.269 14:25:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:43.269 14:25:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:43.835 14:25:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:32:44.401 [2024-07-15 14:25:30.176459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:44.401 [2024-07-15 14:25:30.176886] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:32:44.401 [2024-07-15 14:25:30.177019] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:44.401 [2024-07-15 14:25:30.177182] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:32:44.401 [2024-07-15 14:25:30.177383] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 
00:32:44.401 [2024-07-15 14:25:30.177492] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:32:44.401 [2024-07-15 14:25:30.177664] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:44.401 BaseBdev2 00:32:44.401 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:44.401 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:32:44.401 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:32:44.401 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:32:44.401 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:32:44.401 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:32:44.401 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:44.660 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:44.919 [ 00:32:44.919 { 00:32:44.919 "name": "BaseBdev2", 00:32:44.919 "aliases": [ 00:32:44.919 "5b63f600-6356-41c4-9561-00acf175e459" 00:32:44.919 ], 00:32:44.919 "product_name": "Malloc disk", 00:32:44.919 "block_size": 4096, 00:32:44.919 "num_blocks": 8192, 00:32:44.919 "uuid": "5b63f600-6356-41c4-9561-00acf175e459", 00:32:44.919 "md_size": 32, 00:32:44.919 "md_interleave": false, 00:32:44.919 "dif_type": 0, 00:32:44.919 "assigned_rate_limits": { 00:32:44.919 "rw_ios_per_sec": 0, 00:32:44.919 "rw_mbytes_per_sec": 0, 00:32:44.919 "r_mbytes_per_sec": 0, 00:32:44.919 "w_mbytes_per_sec": 0 00:32:44.919 }, 00:32:44.919 "claimed": true, 00:32:44.919 "claim_type": "exclusive_write", 00:32:44.919 "zoned": false, 00:32:44.919 "supported_io_types": { 00:32:44.919 "read": true, 00:32:44.919 "write": true, 00:32:44.919 "unmap": true, 00:32:44.919 "flush": true, 00:32:44.919 "reset": true, 00:32:44.919 "nvme_admin": false, 00:32:44.919 "nvme_io": false, 00:32:44.919 "nvme_io_md": false, 00:32:44.919 "write_zeroes": true, 00:32:44.919 "zcopy": true, 00:32:44.919 "get_zone_info": false, 00:32:44.919 "zone_management": false, 00:32:44.919 "zone_append": false, 00:32:44.919 "compare": false, 00:32:44.919 "compare_and_write": false, 00:32:44.919 "abort": true, 00:32:44.919 "seek_hole": false, 00:32:44.919 "seek_data": false, 00:32:44.919 "copy": true, 00:32:44.919 "nvme_iov_md": false 00:32:44.919 }, 00:32:44.919 "memory_domains": [ 00:32:44.919 { 00:32:44.919 "dma_device_id": "system", 00:32:44.919 "dma_device_type": 1 00:32:44.919 }, 00:32:44.919 { 00:32:44.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:44.919 "dma_device_type": 2 00:32:44.919 } 00:32:44.919 ], 00:32:44.919 "driver_specific": {} 00:32:44.919 } 00:32:44.919 ] 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.919 14:25:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.178 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.178 "name": "Existed_Raid", 00:32:45.178 "uuid": "a36e7c1a-2b16-48af-a1c4-2d01f6b117ea", 00:32:45.178 "strip_size_kb": 0, 00:32:45.178 "state": "online", 00:32:45.178 "raid_level": "raid1", 00:32:45.178 "superblock": true, 00:32:45.178 "num_base_bdevs": 2, 00:32:45.178 "num_base_bdevs_discovered": 2, 00:32:45.178 "num_base_bdevs_operational": 2, 00:32:45.178 "base_bdevs_list": [ 00:32:45.178 { 00:32:45.178 "name": "BaseBdev1", 00:32:45.178 "uuid": "e142693a-4e52-4887-9699-b9a279ab088d", 00:32:45.178 "is_configured": true, 00:32:45.178 "data_offset": 256, 00:32:45.178 "data_size": 7936 00:32:45.178 }, 00:32:45.178 { 00:32:45.178 "name": "BaseBdev2", 00:32:45.178 "uuid": "5b63f600-6356-41c4-9561-00acf175e459", 00:32:45.178 "is_configured": true, 00:32:45.178 "data_offset": 256, 00:32:45.178 "data_size": 7936 00:32:45.178 } 00:32:45.178 ] 00:32:45.178 }' 00:32:45.178 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.178 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:45.746 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:46.005 [2024-07-15 14:25:31.896930] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:46.005 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:46.005 "name": "Existed_Raid", 00:32:46.005 "aliases": [ 00:32:46.005 "a36e7c1a-2b16-48af-a1c4-2d01f6b117ea" 00:32:46.005 ], 00:32:46.005 "product_name": "Raid Volume", 00:32:46.005 "block_size": 4096, 00:32:46.005 "num_blocks": 7936, 00:32:46.005 "uuid": "a36e7c1a-2b16-48af-a1c4-2d01f6b117ea", 00:32:46.005 "md_size": 32, 00:32:46.005 "md_interleave": false, 00:32:46.005 "dif_type": 0, 00:32:46.005 "assigned_rate_limits": { 00:32:46.005 "rw_ios_per_sec": 0, 00:32:46.005 "rw_mbytes_per_sec": 0, 00:32:46.005 "r_mbytes_per_sec": 0, 00:32:46.005 "w_mbytes_per_sec": 0 00:32:46.005 }, 00:32:46.005 "claimed": false, 00:32:46.005 "zoned": false, 00:32:46.005 "supported_io_types": { 00:32:46.005 "read": true, 00:32:46.005 "write": true, 00:32:46.005 "unmap": false, 00:32:46.005 "flush": false, 00:32:46.005 "reset": true, 00:32:46.005 "nvme_admin": false, 00:32:46.005 "nvme_io": false, 00:32:46.005 "nvme_io_md": false, 00:32:46.005 "write_zeroes": true, 00:32:46.005 "zcopy": false, 00:32:46.005 "get_zone_info": false, 00:32:46.005 "zone_management": false, 00:32:46.005 "zone_append": false, 00:32:46.005 "compare": false, 00:32:46.005 "compare_and_write": false, 00:32:46.005 "abort": false, 00:32:46.005 "seek_hole": false, 00:32:46.005 "seek_data": false, 00:32:46.005 "copy": false, 00:32:46.005 "nvme_iov_md": false 00:32:46.005 }, 00:32:46.005 "memory_domains": [ 00:32:46.005 { 00:32:46.005 "dma_device_id": "system", 00:32:46.005 "dma_device_type": 1 00:32:46.005 }, 00:32:46.005 { 00:32:46.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.005 "dma_device_type": 2 00:32:46.005 }, 00:32:46.005 { 00:32:46.005 "dma_device_id": "system", 00:32:46.005 "dma_device_type": 1 00:32:46.005 }, 00:32:46.005 { 00:32:46.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.005 "dma_device_type": 2 00:32:46.005 } 00:32:46.005 ], 00:32:46.005 "driver_specific": { 00:32:46.005 "raid": { 00:32:46.005 "uuid": "a36e7c1a-2b16-48af-a1c4-2d01f6b117ea", 00:32:46.005 "strip_size_kb": 0, 00:32:46.005 "state": "online", 00:32:46.005 "raid_level": "raid1", 00:32:46.005 "superblock": true, 00:32:46.005 "num_base_bdevs": 2, 00:32:46.005 "num_base_bdevs_discovered": 2, 00:32:46.005 "num_base_bdevs_operational": 2, 00:32:46.005 "base_bdevs_list": [ 00:32:46.005 { 00:32:46.005 "name": "BaseBdev1", 00:32:46.005 "uuid": "e142693a-4e52-4887-9699-b9a279ab088d", 00:32:46.005 "is_configured": true, 00:32:46.005 "data_offset": 256, 00:32:46.005 "data_size": 7936 00:32:46.005 }, 00:32:46.005 { 00:32:46.005 "name": "BaseBdev2", 00:32:46.006 "uuid": "5b63f600-6356-41c4-9561-00acf175e459", 00:32:46.006 "is_configured": true, 00:32:46.006 "data_offset": 256, 00:32:46.006 "data_size": 7936 00:32:46.006 } 00:32:46.006 ] 00:32:46.006 } 00:32:46.006 } 00:32:46.006 }' 00:32:46.006 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- 
# jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:46.006 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:46.006 BaseBdev2' 00:32:46.006 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:46.006 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:46.006 14:25:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:46.265 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:46.265 "name": "BaseBdev1", 00:32:46.265 "aliases": [ 00:32:46.265 "e142693a-4e52-4887-9699-b9a279ab088d" 00:32:46.265 ], 00:32:46.265 "product_name": "Malloc disk", 00:32:46.265 "block_size": 4096, 00:32:46.265 "num_blocks": 8192, 00:32:46.265 "uuid": "e142693a-4e52-4887-9699-b9a279ab088d", 00:32:46.265 "md_size": 32, 00:32:46.265 "md_interleave": false, 00:32:46.265 "dif_type": 0, 00:32:46.265 "assigned_rate_limits": { 00:32:46.265 "rw_ios_per_sec": 0, 00:32:46.265 "rw_mbytes_per_sec": 0, 00:32:46.265 "r_mbytes_per_sec": 0, 00:32:46.265 "w_mbytes_per_sec": 0 00:32:46.265 }, 00:32:46.265 "claimed": true, 00:32:46.265 "claim_type": "exclusive_write", 00:32:46.265 "zoned": false, 00:32:46.265 "supported_io_types": { 00:32:46.265 "read": true, 00:32:46.265 "write": true, 00:32:46.265 "unmap": true, 00:32:46.265 "flush": true, 00:32:46.265 "reset": true, 00:32:46.265 "nvme_admin": false, 00:32:46.265 "nvme_io": false, 00:32:46.265 "nvme_io_md": false, 00:32:46.265 "write_zeroes": true, 00:32:46.265 "zcopy": true, 00:32:46.265 "get_zone_info": false, 00:32:46.265 "zone_management": false, 00:32:46.265 "zone_append": false, 00:32:46.265 "compare": false, 00:32:46.265 "compare_and_write": false, 00:32:46.265 "abort": true, 00:32:46.265 "seek_hole": false, 00:32:46.265 "seek_data": false, 00:32:46.265 "copy": true, 00:32:46.265 "nvme_iov_md": false 00:32:46.265 }, 00:32:46.265 "memory_domains": [ 00:32:46.265 { 00:32:46.265 "dma_device_id": "system", 00:32:46.265 "dma_device_type": 1 00:32:46.265 }, 00:32:46.265 { 00:32:46.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:46.265 "dma_device_type": 2 00:32:46.265 } 00:32:46.265 ], 00:32:46.265 "driver_specific": {} 00:32:46.265 }' 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:46.523 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:46.782 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- 
# [[ false == false ]] 00:32:46.782 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:46.782 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:46.782 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:46.782 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:46.782 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:46.782 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:47.041 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:47.041 "name": "BaseBdev2", 00:32:47.041 "aliases": [ 00:32:47.041 "5b63f600-6356-41c4-9561-00acf175e459" 00:32:47.041 ], 00:32:47.041 "product_name": "Malloc disk", 00:32:47.041 "block_size": 4096, 00:32:47.041 "num_blocks": 8192, 00:32:47.041 "uuid": "5b63f600-6356-41c4-9561-00acf175e459", 00:32:47.041 "md_size": 32, 00:32:47.041 "md_interleave": false, 00:32:47.041 "dif_type": 0, 00:32:47.041 "assigned_rate_limits": { 00:32:47.041 "rw_ios_per_sec": 0, 00:32:47.041 "rw_mbytes_per_sec": 0, 00:32:47.041 "r_mbytes_per_sec": 0, 00:32:47.041 "w_mbytes_per_sec": 0 00:32:47.041 }, 00:32:47.041 "claimed": true, 00:32:47.041 "claim_type": "exclusive_write", 00:32:47.041 "zoned": false, 00:32:47.041 "supported_io_types": { 00:32:47.041 "read": true, 00:32:47.041 "write": true, 00:32:47.041 "unmap": true, 00:32:47.041 "flush": true, 00:32:47.041 "reset": true, 00:32:47.041 "nvme_admin": false, 00:32:47.041 "nvme_io": false, 00:32:47.041 "nvme_io_md": false, 00:32:47.041 "write_zeroes": true, 00:32:47.041 "zcopy": true, 00:32:47.041 "get_zone_info": false, 00:32:47.041 "zone_management": false, 00:32:47.041 "zone_append": false, 00:32:47.041 "compare": false, 00:32:47.041 "compare_and_write": false, 00:32:47.041 "abort": true, 00:32:47.041 "seek_hole": false, 00:32:47.041 "seek_data": false, 00:32:47.041 "copy": true, 00:32:47.041 "nvme_iov_md": false 00:32:47.041 }, 00:32:47.041 "memory_domains": [ 00:32:47.041 { 00:32:47.041 "dma_device_id": "system", 00:32:47.041 "dma_device_type": 1 00:32:47.041 }, 00:32:47.041 { 00:32:47.041 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.041 "dma_device_type": 2 00:32:47.041 } 00:32:47.041 ], 00:32:47.041 "driver_specific": {} 00:32:47.041 }' 00:32:47.041 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:47.041 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:47.041 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:47.041 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:47.041 14:25:32 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:47.041 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:47.041 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:47.299 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:47.299 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:47.299 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:47.299 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:47.299 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:47.299 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:47.559 [2024-07-15 14:25:33.437345] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.559 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:47.823 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:47.823 "name": "Existed_Raid", 00:32:47.823 "uuid": "a36e7c1a-2b16-48af-a1c4-2d01f6b117ea", 00:32:47.823 "strip_size_kb": 0, 00:32:47.823 "state": "online", 00:32:47.823 "raid_level": "raid1", 00:32:47.823 "superblock": true, 00:32:47.823 "num_base_bdevs": 2, 00:32:47.823 "num_base_bdevs_discovered": 1, 00:32:47.823 "num_base_bdevs_operational": 1, 00:32:47.823 
"base_bdevs_list": [ 00:32:47.823 { 00:32:47.823 "name": null, 00:32:47.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:47.823 "is_configured": false, 00:32:47.823 "data_offset": 256, 00:32:47.823 "data_size": 7936 00:32:47.823 }, 00:32:47.823 { 00:32:47.823 "name": "BaseBdev2", 00:32:47.823 "uuid": "5b63f600-6356-41c4-9561-00acf175e459", 00:32:47.823 "is_configured": true, 00:32:47.823 "data_offset": 256, 00:32:47.823 "data_size": 7936 00:32:47.823 } 00:32:47.824 ] 00:32:47.824 }' 00:32:47.824 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:47.824 14:25:33 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:48.758 14:25:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:48.758 14:25:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:48.758 14:25:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:48.758 14:25:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:48.758 14:25:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:48.758 14:25:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:48.758 14:25:34 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:49.017 [2024-07-15 14:25:34.943559] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:49.017 [2024-07-15 14:25:34.943684] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:49.275 [2024-07-15 14:25:35.036656] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:49.275 [2024-07-15 14:25:35.036730] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:49.275 [2024-07-15 14:25:35.036768] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:32:49.275 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:49.275 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:49.275 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:49.275 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 217947 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@948 -- # '[' -z 217947 ']' 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 217947 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 217947 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 217947' 00:32:49.555 killing process with pid 217947 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 217947 00:32:49.555 [2024-07-15 14:25:35.353179] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:49.555 [2024-07-15 14:25:35.353300] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:49.555 14:25:35 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 217947 00:32:50.491 14:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:32:50.491 00:32:50.491 real 0m12.905s 00:32:50.491 user 0m22.691s 00:32:50.491 sys 0m1.484s 00:32:50.491 14:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:50.491 14:25:36 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.491 ************************************ 00:32:50.491 END TEST raid_state_function_test_sb_md_separate 00:32:50.491 ************************************ 00:32:50.749 14:25:36 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:32:50.749 14:25:36 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:32:50.749 14:25:36 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:32:50.749 14:25:36 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:50.749 14:25:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:50.749 ************************************ 00:32:50.749 START TEST raid_superblock_test_md_separate 00:32:50.749 ************************************ 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 
00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=218328 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 218328 /var/tmp/spdk-raid.sock 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 218328 ']' 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:50.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:50.749 14:25:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:50.749 [2024-07-15 14:25:36.601544] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:32:50.750 [2024-07-15 14:25:36.601773] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218328 ] 00:32:51.008 [2024-07-15 14:25:36.765544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.008 [2024-07-15 14:25:36.974440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.267 [2024-07-15 14:25:37.171653] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:51.834 14:25:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:32:52.093 malloc1 00:32:52.093 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:52.352 [2024-07-15 14:25:38.248358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:52.352 [2024-07-15 14:25:38.248830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.352 [2024-07-15 14:25:38.248951] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:32:52.352 [2024-07-15 14:25:38.249033] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.352 [2024-07-15 14:25:38.250673] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.352 [2024-07-15 14:25:38.250814] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:52.352 pt1 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:32:52.352 
14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:32:52.352 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:32:52.611 malloc2 00:32:52.611 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:52.870 [2024-07-15 14:25:38.770441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:52.870 [2024-07-15 14:25:38.770675] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:52.870 [2024-07-15 14:25:38.770799] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:32:52.870 [2024-07-15 14:25:38.770888] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:52.870 [2024-07-15 14:25:38.772487] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:52.870 [2024-07-15 14:25:38.772598] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:52.870 pt2 00:32:52.870 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:32:52.870 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:32:52.870 14:25:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:32:53.129 [2024-07-15 14:25:39.006498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:53.129 [2024-07-15 14:25:39.007977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:53.129 [2024-07-15 14:25:39.008149] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:32:53.129 [2024-07-15 14:25:39.008166] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:53.129 [2024-07-15 14:25:39.008275] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:32:53.129 [2024-07-15 14:25:39.008368] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:32:53.129 [2024-07-15 14:25:39.008390] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:32:53.129 [2024-07-15 14:25:39.008480] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:53.129 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:53.388 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:53.388 "name": "raid_bdev1", 00:32:53.388 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:32:53.388 "strip_size_kb": 0, 00:32:53.388 "state": "online", 00:32:53.388 "raid_level": "raid1", 00:32:53.388 "superblock": true, 00:32:53.388 "num_base_bdevs": 2, 00:32:53.388 "num_base_bdevs_discovered": 2, 00:32:53.388 "num_base_bdevs_operational": 2, 00:32:53.388 "base_bdevs_list": [ 00:32:53.388 { 00:32:53.388 "name": "pt1", 00:32:53.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:53.388 "is_configured": true, 00:32:53.388 "data_offset": 256, 00:32:53.388 "data_size": 7936 00:32:53.388 }, 00:32:53.388 { 00:32:53.388 "name": "pt2", 00:32:53.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:53.388 "is_configured": true, 00:32:53.388 "data_offset": 256, 00:32:53.388 "data_size": 7936 00:32:53.388 } 00:32:53.388 ] 00:32:53.388 }' 00:32:53.388 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:53.388 14:25:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:53.956 14:25:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:54.215 [2024-07-15 14:25:40.102815] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:54.215 
14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:54.215 "name": "raid_bdev1", 00:32:54.215 "aliases": [ 00:32:54.215 "77241564-553c-4544-8180-44b443a150f5" 00:32:54.215 ], 00:32:54.215 "product_name": "Raid Volume", 00:32:54.215 "block_size": 4096, 00:32:54.215 "num_blocks": 7936, 00:32:54.215 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:32:54.215 "md_size": 32, 00:32:54.215 "md_interleave": false, 00:32:54.215 "dif_type": 0, 00:32:54.215 "assigned_rate_limits": { 00:32:54.215 "rw_ios_per_sec": 0, 00:32:54.215 "rw_mbytes_per_sec": 0, 00:32:54.215 "r_mbytes_per_sec": 0, 00:32:54.215 "w_mbytes_per_sec": 0 00:32:54.215 }, 00:32:54.215 "claimed": false, 00:32:54.215 "zoned": false, 00:32:54.215 "supported_io_types": { 00:32:54.215 "read": true, 00:32:54.215 "write": true, 00:32:54.215 "unmap": false, 00:32:54.215 "flush": false, 00:32:54.215 "reset": true, 00:32:54.215 "nvme_admin": false, 00:32:54.215 "nvme_io": false, 00:32:54.215 "nvme_io_md": false, 00:32:54.215 "write_zeroes": true, 00:32:54.215 "zcopy": false, 00:32:54.215 "get_zone_info": false, 00:32:54.215 "zone_management": false, 00:32:54.215 "zone_append": false, 00:32:54.215 "compare": false, 00:32:54.215 "compare_and_write": false, 00:32:54.215 "abort": false, 00:32:54.215 "seek_hole": false, 00:32:54.215 "seek_data": false, 00:32:54.215 "copy": false, 00:32:54.215 "nvme_iov_md": false 00:32:54.215 }, 00:32:54.215 "memory_domains": [ 00:32:54.215 { 00:32:54.215 "dma_device_id": "system", 00:32:54.215 "dma_device_type": 1 00:32:54.215 }, 00:32:54.215 { 00:32:54.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.215 "dma_device_type": 2 00:32:54.215 }, 00:32:54.215 { 00:32:54.215 "dma_device_id": "system", 00:32:54.215 "dma_device_type": 1 00:32:54.215 }, 00:32:54.215 { 00:32:54.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.215 "dma_device_type": 2 00:32:54.215 } 00:32:54.215 ], 00:32:54.215 "driver_specific": { 00:32:54.215 "raid": { 00:32:54.215 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:32:54.215 "strip_size_kb": 0, 00:32:54.215 "state": "online", 00:32:54.215 "raid_level": "raid1", 00:32:54.215 "superblock": true, 00:32:54.215 "num_base_bdevs": 2, 00:32:54.215 "num_base_bdevs_discovered": 2, 00:32:54.215 "num_base_bdevs_operational": 2, 00:32:54.215 "base_bdevs_list": [ 00:32:54.215 { 00:32:54.215 "name": "pt1", 00:32:54.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:54.215 "is_configured": true, 00:32:54.215 "data_offset": 256, 00:32:54.215 "data_size": 7936 00:32:54.215 }, 00:32:54.215 { 00:32:54.215 "name": "pt2", 00:32:54.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:54.215 "is_configured": true, 00:32:54.215 "data_offset": 256, 00:32:54.215 "data_size": 7936 00:32:54.215 } 00:32:54.215 ] 00:32:54.215 } 00:32:54.215 } 00:32:54.215 }' 00:32:54.215 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:54.215 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:32:54.215 pt2' 00:32:54.215 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:54.215 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:32:54.216 14:25:40 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:54.474 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:54.474 "name": "pt1", 00:32:54.474 "aliases": [ 00:32:54.474 "00000000-0000-0000-0000-000000000001" 00:32:54.474 ], 00:32:54.474 "product_name": "passthru", 00:32:54.474 "block_size": 4096, 00:32:54.474 "num_blocks": 8192, 00:32:54.474 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:54.474 "md_size": 32, 00:32:54.474 "md_interleave": false, 00:32:54.474 "dif_type": 0, 00:32:54.474 "assigned_rate_limits": { 00:32:54.474 "rw_ios_per_sec": 0, 00:32:54.474 "rw_mbytes_per_sec": 0, 00:32:54.474 "r_mbytes_per_sec": 0, 00:32:54.474 "w_mbytes_per_sec": 0 00:32:54.474 }, 00:32:54.474 "claimed": true, 00:32:54.474 "claim_type": "exclusive_write", 00:32:54.474 "zoned": false, 00:32:54.474 "supported_io_types": { 00:32:54.474 "read": true, 00:32:54.474 "write": true, 00:32:54.474 "unmap": true, 00:32:54.474 "flush": true, 00:32:54.474 "reset": true, 00:32:54.474 "nvme_admin": false, 00:32:54.474 "nvme_io": false, 00:32:54.474 "nvme_io_md": false, 00:32:54.474 "write_zeroes": true, 00:32:54.474 "zcopy": true, 00:32:54.474 "get_zone_info": false, 00:32:54.474 "zone_management": false, 00:32:54.474 "zone_append": false, 00:32:54.474 "compare": false, 00:32:54.474 "compare_and_write": false, 00:32:54.474 "abort": true, 00:32:54.474 "seek_hole": false, 00:32:54.474 "seek_data": false, 00:32:54.474 "copy": true, 00:32:54.474 "nvme_iov_md": false 00:32:54.474 }, 00:32:54.474 "memory_domains": [ 00:32:54.474 { 00:32:54.474 "dma_device_id": "system", 00:32:54.474 "dma_device_type": 1 00:32:54.474 }, 00:32:54.474 { 00:32:54.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.474 "dma_device_type": 2 00:32:54.474 } 00:32:54.474 ], 00:32:54.474 "driver_specific": { 00:32:54.474 "passthru": { 00:32:54.474 "name": "pt1", 00:32:54.474 "base_bdev_name": "malloc1" 00:32:54.474 } 00:32:54.474 } 00:32:54.474 }' 00:32:54.474 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:54.474 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:54.733 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:54.991 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:54.991 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:54.991 14:25:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:54.991 14:25:40 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:55.249 "name": "pt2", 00:32:55.249 "aliases": [ 00:32:55.249 "00000000-0000-0000-0000-000000000002" 00:32:55.249 ], 00:32:55.249 "product_name": "passthru", 00:32:55.249 "block_size": 4096, 00:32:55.249 "num_blocks": 8192, 00:32:55.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:55.249 "md_size": 32, 00:32:55.249 "md_interleave": false, 00:32:55.249 "dif_type": 0, 00:32:55.249 "assigned_rate_limits": { 00:32:55.249 "rw_ios_per_sec": 0, 00:32:55.249 "rw_mbytes_per_sec": 0, 00:32:55.249 "r_mbytes_per_sec": 0, 00:32:55.249 "w_mbytes_per_sec": 0 00:32:55.249 }, 00:32:55.249 "claimed": true, 00:32:55.249 "claim_type": "exclusive_write", 00:32:55.249 "zoned": false, 00:32:55.249 "supported_io_types": { 00:32:55.249 "read": true, 00:32:55.249 "write": true, 00:32:55.249 "unmap": true, 00:32:55.249 "flush": true, 00:32:55.249 "reset": true, 00:32:55.249 "nvme_admin": false, 00:32:55.249 "nvme_io": false, 00:32:55.249 "nvme_io_md": false, 00:32:55.249 "write_zeroes": true, 00:32:55.249 "zcopy": true, 00:32:55.249 "get_zone_info": false, 00:32:55.249 "zone_management": false, 00:32:55.249 "zone_append": false, 00:32:55.249 "compare": false, 00:32:55.249 "compare_and_write": false, 00:32:55.249 "abort": true, 00:32:55.249 "seek_hole": false, 00:32:55.249 "seek_data": false, 00:32:55.249 "copy": true, 00:32:55.249 "nvme_iov_md": false 00:32:55.249 }, 00:32:55.249 "memory_domains": [ 00:32:55.249 { 00:32:55.249 "dma_device_id": "system", 00:32:55.249 "dma_device_type": 1 00:32:55.249 }, 00:32:55.249 { 00:32:55.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:55.249 "dma_device_type": 2 00:32:55.249 } 00:32:55.249 ], 00:32:55.249 "driver_specific": { 00:32:55.249 "passthru": { 00:32:55.249 "name": "pt2", 00:32:55.249 "base_bdev_name": "malloc2" 00:32:55.249 } 00:32:55.249 } 00:32:55.249 }' 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:55.249 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:55.507 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:32:55.507 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:55.507 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:55.507 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:32:55.508 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:32:55.508 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:32:55.765 [2024-07-15 14:25:41.603000] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:55.765 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=77241564-553c-4544-8180-44b443a150f5 00:32:55.765 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 77241564-553c-4544-8180-44b443a150f5 ']' 00:32:55.765 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:56.074 [2024-07-15 14:25:41.894863] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:56.074 [2024-07-15 14:25:41.894905] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:56.074 [2024-07-15 14:25:41.894974] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:56.074 [2024-07-15 14:25:41.895024] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:56.075 [2024-07-15 14:25:41.895035] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:32:56.075 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:56.075 14:25:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:32:56.332 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:32:56.332 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:32:56.332 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:56.332 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:32:56.590 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:32:56.590 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:32:56.848 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:32:56.848 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.107 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:57.107 14:25:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:57.107 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:57.107 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:57.107 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:32:57.364 [2024-07-15 14:25:43.223051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:32:57.364 [2024-07-15 14:25:43.224332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:32:57.364 [2024-07-15 14:25:43.224397] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:32:57.364 [2024-07-15 14:25:43.224825] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:32:57.364 [2024-07-15 14:25:43.224938] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:57.364 [2024-07-15 14:25:43.224953] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:32:57.364 request: 00:32:57.364 { 00:32:57.364 "name": "raid_bdev1", 00:32:57.364 "raid_level": "raid1", 00:32:57.364 "base_bdevs": [ 00:32:57.364 "malloc1", 00:32:57.364 "malloc2" 00:32:57.364 ], 00:32:57.364 "superblock": false, 00:32:57.364 "method": "bdev_raid_create", 00:32:57.364 "req_id": 1 00:32:57.364 } 00:32:57.364 Got JSON-RPC error response 00:32:57.364 response: 00:32:57.364 { 00:32:57.364 "code": -17, 00:32:57.364 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:32:57.364 } 00:32:57.364 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:32:57.364 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:57.364 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:57.364 14:25:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:57.364 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:32:57.364 14:25:43 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.619 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:32:57.619 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:32:57.619 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:32:57.876 [2024-07-15 14:25:43.743071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:32:57.876 [2024-07-15 14:25:43.743315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:57.876 [2024-07-15 14:25:43.743415] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:57.876 [2024-07-15 14:25:43.743500] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:57.876 [2024-07-15 14:25:43.745187] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:57.876 [2024-07-15 14:25:43.745321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:32:57.876 [2024-07-15 14:25:43.745468] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:32:57.876 [2024-07-15 14:25:43.745535] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:32:57.876 pt1 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.876 14:25:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.132 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:58.132 "name": "raid_bdev1", 00:32:58.132 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:32:58.132 "strip_size_kb": 0, 00:32:58.132 "state": "configuring", 00:32:58.132 "raid_level": "raid1", 00:32:58.132 "superblock": true, 00:32:58.132 "num_base_bdevs": 2, 00:32:58.132 "num_base_bdevs_discovered": 
1, 00:32:58.132 "num_base_bdevs_operational": 2, 00:32:58.132 "base_bdevs_list": [ 00:32:58.132 { 00:32:58.132 "name": "pt1", 00:32:58.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:58.132 "is_configured": true, 00:32:58.132 "data_offset": 256, 00:32:58.132 "data_size": 7936 00:32:58.132 }, 00:32:58.132 { 00:32:58.132 "name": null, 00:32:58.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:58.132 "is_configured": false, 00:32:58.132 "data_offset": 256, 00:32:58.132 "data_size": 7936 00:32:58.132 } 00:32:58.132 ] 00:32:58.132 }' 00:32:58.132 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:58.132 14:25:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:32:58.696 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:32:58.696 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:32:58.696 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:58.696 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:32:58.953 [2024-07-15 14:25:44.879221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:32:58.953 [2024-07-15 14:25:44.879642] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:58.953 [2024-07-15 14:25:44.879782] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:32:58.953 [2024-07-15 14:25:44.879874] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:58.953 [2024-07-15 14:25:44.880151] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:58.953 [2024-07-15 14:25:44.880294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:32:58.953 [2024-07-15 14:25:44.880444] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:32:58.953 [2024-07-15 14:25:44.880481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:32:58.953 [2024-07-15 14:25:44.880544] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:32:58.953 [2024-07-15 14:25:44.880556] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:32:58.953 [2024-07-15 14:25:44.880633] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:58.953 [2024-07-15 14:25:44.880751] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:32:58.953 [2024-07-15 14:25:44.880766] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:32:58.953 [2024-07-15 14:25:44.880838] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.953 pt2 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.953 14:25:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.211 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:59.211 "name": "raid_bdev1", 00:32:59.211 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:32:59.211 "strip_size_kb": 0, 00:32:59.211 "state": "online", 00:32:59.211 "raid_level": "raid1", 00:32:59.211 "superblock": true, 00:32:59.211 "num_base_bdevs": 2, 00:32:59.211 "num_base_bdevs_discovered": 2, 00:32:59.211 "num_base_bdevs_operational": 2, 00:32:59.211 "base_bdevs_list": [ 00:32:59.211 { 00:32:59.211 "name": "pt1", 00:32:59.211 "uuid": "00000000-0000-0000-0000-000000000001", 00:32:59.211 "is_configured": true, 00:32:59.211 "data_offset": 256, 00:32:59.211 "data_size": 7936 00:32:59.211 }, 00:32:59.211 { 00:32:59.211 "name": "pt2", 00:32:59.211 "uuid": "00000000-0000-0000-0000-000000000002", 00:32:59.211 "is_configured": true, 00:32:59.211 "data_offset": 256, 00:32:59.211 "data_size": 7936 00:32:59.211 } 00:32:59.211 ] 00:32:59.211 }' 00:32:59.211 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:59.211 14:25:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:00.146 14:25:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 
00:33:00.146 [2024-07-15 14:25:46.071522] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:00.146 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:00.146 "name": "raid_bdev1", 00:33:00.146 "aliases": [ 00:33:00.146 "77241564-553c-4544-8180-44b443a150f5" 00:33:00.146 ], 00:33:00.146 "product_name": "Raid Volume", 00:33:00.146 "block_size": 4096, 00:33:00.146 "num_blocks": 7936, 00:33:00.146 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:33:00.146 "md_size": 32, 00:33:00.146 "md_interleave": false, 00:33:00.146 "dif_type": 0, 00:33:00.146 "assigned_rate_limits": { 00:33:00.146 "rw_ios_per_sec": 0, 00:33:00.146 "rw_mbytes_per_sec": 0, 00:33:00.146 "r_mbytes_per_sec": 0, 00:33:00.146 "w_mbytes_per_sec": 0 00:33:00.146 }, 00:33:00.146 "claimed": false, 00:33:00.146 "zoned": false, 00:33:00.146 "supported_io_types": { 00:33:00.146 "read": true, 00:33:00.146 "write": true, 00:33:00.146 "unmap": false, 00:33:00.146 "flush": false, 00:33:00.146 "reset": true, 00:33:00.146 "nvme_admin": false, 00:33:00.146 "nvme_io": false, 00:33:00.146 "nvme_io_md": false, 00:33:00.146 "write_zeroes": true, 00:33:00.146 "zcopy": false, 00:33:00.146 "get_zone_info": false, 00:33:00.146 "zone_management": false, 00:33:00.146 "zone_append": false, 00:33:00.146 "compare": false, 00:33:00.146 "compare_and_write": false, 00:33:00.146 "abort": false, 00:33:00.146 "seek_hole": false, 00:33:00.146 "seek_data": false, 00:33:00.146 "copy": false, 00:33:00.146 "nvme_iov_md": false 00:33:00.146 }, 00:33:00.146 "memory_domains": [ 00:33:00.146 { 00:33:00.146 "dma_device_id": "system", 00:33:00.146 "dma_device_type": 1 00:33:00.146 }, 00:33:00.146 { 00:33:00.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.146 "dma_device_type": 2 00:33:00.146 }, 00:33:00.146 { 00:33:00.146 "dma_device_id": "system", 00:33:00.146 "dma_device_type": 1 00:33:00.146 }, 00:33:00.146 { 00:33:00.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.146 "dma_device_type": 2 00:33:00.146 } 00:33:00.146 ], 00:33:00.146 "driver_specific": { 00:33:00.146 "raid": { 00:33:00.146 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:33:00.146 "strip_size_kb": 0, 00:33:00.146 "state": "online", 00:33:00.146 "raid_level": "raid1", 00:33:00.146 "superblock": true, 00:33:00.146 "num_base_bdevs": 2, 00:33:00.146 "num_base_bdevs_discovered": 2, 00:33:00.146 "num_base_bdevs_operational": 2, 00:33:00.146 "base_bdevs_list": [ 00:33:00.146 { 00:33:00.146 "name": "pt1", 00:33:00.146 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:00.146 "is_configured": true, 00:33:00.146 "data_offset": 256, 00:33:00.146 "data_size": 7936 00:33:00.146 }, 00:33:00.146 { 00:33:00.146 "name": "pt2", 00:33:00.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:00.146 "is_configured": true, 00:33:00.146 "data_offset": 256, 00:33:00.146 "data_size": 7936 00:33:00.146 } 00:33:00.146 ] 00:33:00.146 } 00:33:00.146 } 00:33:00.146 }' 00:33:00.146 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:00.146 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:00.146 pt2' 00:33:00.146 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:00.146 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:00.146 14:25:46 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:00.720 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:00.720 "name": "pt1", 00:33:00.720 "aliases": [ 00:33:00.720 "00000000-0000-0000-0000-000000000001" 00:33:00.720 ], 00:33:00.720 "product_name": "passthru", 00:33:00.720 "block_size": 4096, 00:33:00.720 "num_blocks": 8192, 00:33:00.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:00.720 "md_size": 32, 00:33:00.720 "md_interleave": false, 00:33:00.720 "dif_type": 0, 00:33:00.720 "assigned_rate_limits": { 00:33:00.720 "rw_ios_per_sec": 0, 00:33:00.720 "rw_mbytes_per_sec": 0, 00:33:00.720 "r_mbytes_per_sec": 0, 00:33:00.720 "w_mbytes_per_sec": 0 00:33:00.720 }, 00:33:00.720 "claimed": true, 00:33:00.720 "claim_type": "exclusive_write", 00:33:00.720 "zoned": false, 00:33:00.720 "supported_io_types": { 00:33:00.720 "read": true, 00:33:00.720 "write": true, 00:33:00.720 "unmap": true, 00:33:00.720 "flush": true, 00:33:00.720 "reset": true, 00:33:00.720 "nvme_admin": false, 00:33:00.720 "nvme_io": false, 00:33:00.720 "nvme_io_md": false, 00:33:00.720 "write_zeroes": true, 00:33:00.720 "zcopy": true, 00:33:00.720 "get_zone_info": false, 00:33:00.720 "zone_management": false, 00:33:00.720 "zone_append": false, 00:33:00.720 "compare": false, 00:33:00.720 "compare_and_write": false, 00:33:00.721 "abort": true, 00:33:00.721 "seek_hole": false, 00:33:00.721 "seek_data": false, 00:33:00.721 "copy": true, 00:33:00.721 "nvme_iov_md": false 00:33:00.721 }, 00:33:00.721 "memory_domains": [ 00:33:00.721 { 00:33:00.721 "dma_device_id": "system", 00:33:00.721 "dma_device_type": 1 00:33:00.721 }, 00:33:00.721 { 00:33:00.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.721 "dma_device_type": 2 00:33:00.721 } 00:33:00.721 ], 00:33:00.721 "driver_specific": { 00:33:00.721 "passthru": { 00:33:00.721 "name": "pt1", 00:33:00.721 "base_bdev_name": "malloc1" 00:33:00.721 } 00:33:00.721 } 00:33:00.721 }' 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.721 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:00.993 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:00.993 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:00.993 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:00.993 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:00.993 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:33:00.993 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:00.993 14:25:46 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:01.250 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:01.250 "name": "pt2", 00:33:01.250 "aliases": [ 00:33:01.250 "00000000-0000-0000-0000-000000000002" 00:33:01.250 ], 00:33:01.250 "product_name": "passthru", 00:33:01.250 "block_size": 4096, 00:33:01.250 "num_blocks": 8192, 00:33:01.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:01.250 "md_size": 32, 00:33:01.250 "md_interleave": false, 00:33:01.250 "dif_type": 0, 00:33:01.250 "assigned_rate_limits": { 00:33:01.250 "rw_ios_per_sec": 0, 00:33:01.250 "rw_mbytes_per_sec": 0, 00:33:01.250 "r_mbytes_per_sec": 0, 00:33:01.250 "w_mbytes_per_sec": 0 00:33:01.250 }, 00:33:01.250 "claimed": true, 00:33:01.250 "claim_type": "exclusive_write", 00:33:01.250 "zoned": false, 00:33:01.250 "supported_io_types": { 00:33:01.250 "read": true, 00:33:01.250 "write": true, 00:33:01.250 "unmap": true, 00:33:01.250 "flush": true, 00:33:01.250 "reset": true, 00:33:01.250 "nvme_admin": false, 00:33:01.250 "nvme_io": false, 00:33:01.250 "nvme_io_md": false, 00:33:01.250 "write_zeroes": true, 00:33:01.250 "zcopy": true, 00:33:01.250 "get_zone_info": false, 00:33:01.250 "zone_management": false, 00:33:01.250 "zone_append": false, 00:33:01.250 "compare": false, 00:33:01.250 "compare_and_write": false, 00:33:01.250 "abort": true, 00:33:01.250 "seek_hole": false, 00:33:01.250 "seek_data": false, 00:33:01.250 "copy": true, 00:33:01.250 "nvme_iov_md": false 00:33:01.250 }, 00:33:01.250 "memory_domains": [ 00:33:01.250 { 00:33:01.250 "dma_device_id": "system", 00:33:01.250 "dma_device_type": 1 00:33:01.250 }, 00:33:01.250 { 00:33:01.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.250 "dma_device_type": 2 00:33:01.250 } 00:33:01.250 ], 00:33:01.250 "driver_specific": { 00:33:01.250 "passthru": { 00:33:01.250 "name": "pt2", 00:33:01.250 "base_bdev_name": "malloc2" 00:33:01.250 } 00:33:01.250 } 00:33:01.250 }' 00:33:01.250 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.250 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.250 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:33:01.250 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.506 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:01.506 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:01.506 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.506 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:01.506 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:33:01.506 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.506 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:01.762 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 
00:33:01.762 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:01.762 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:33:02.020 [2024-07-15 14:25:47.810381] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:02.020 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 77241564-553c-4544-8180-44b443a150f5 '!=' 77241564-553c-4544-8180-44b443a150f5 ']' 00:33:02.020 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:33:02.020 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:02.020 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:33:02.020 14:25:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:02.278 [2024-07-15 14:25:48.066230] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:02.278 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.536 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:02.536 "name": "raid_bdev1", 00:33:02.536 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:33:02.536 "strip_size_kb": 0, 00:33:02.536 "state": "online", 00:33:02.536 "raid_level": "raid1", 00:33:02.536 "superblock": true, 00:33:02.536 "num_base_bdevs": 2, 00:33:02.536 "num_base_bdevs_discovered": 1, 00:33:02.536 "num_base_bdevs_operational": 1, 00:33:02.536 "base_bdevs_list": [ 00:33:02.536 { 00:33:02.536 "name": null, 00:33:02.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.536 "is_configured": false, 00:33:02.536 "data_offset": 256, 00:33:02.537 "data_size": 7936 00:33:02.537 }, 00:33:02.537 { 00:33:02.537 "name": "pt2", 
00:33:02.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:02.537 "is_configured": true, 00:33:02.537 "data_offset": 256, 00:33:02.537 "data_size": 7936 00:33:02.537 } 00:33:02.537 ] 00:33:02.537 }' 00:33:02.537 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:02.537 14:25:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:03.102 14:25:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:03.361 [2024-07-15 14:25:49.197159] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:03.361 [2024-07-15 14:25:49.197339] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:03.361 [2024-07-15 14:25:49.197552] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:03.361 [2024-07-15 14:25:49.197738] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:03.361 [2024-07-15 14:25:49.197859] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:33:03.361 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:03.361 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:33:03.619 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:33:03.619 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:33:03.619 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:33:03.619 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:03.619 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:03.876 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:03.876 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:03.876 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:33:03.876 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:03.876 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:33:03.876 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:04.134 [2024-07-15 14:25:49.924061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:04.134 [2024-07-15 14:25:49.924357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:04.134 [2024-07-15 14:25:49.924516] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:04.134 [2024-07-15 14:25:49.924653] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:04.134 [2024-07-15 14:25:49.926320] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:04.134 [2024-07-15 14:25:49.926519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:04.134 [2024-07-15 14:25:49.926758] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:04.134 [2024-07-15 14:25:49.926922] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:04.134 [2024-07-15 14:25:49.927096] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:33:04.134 [2024-07-15 14:25:49.927207] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:04.134 [2024-07-15 14:25:49.927334] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:04.134 [2024-07-15 14:25:49.927568] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:33:04.134 [2024-07-15 14:25:49.927694] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:33:04.134 [2024-07-15 14:25:49.927892] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:04.134 pt2 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.134 14:25:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:04.391 14:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:04.391 "name": "raid_bdev1", 00:33:04.391 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:33:04.391 "strip_size_kb": 0, 00:33:04.391 "state": "online", 00:33:04.391 "raid_level": "raid1", 00:33:04.391 "superblock": true, 00:33:04.391 "num_base_bdevs": 2, 00:33:04.391 "num_base_bdevs_discovered": 1, 00:33:04.391 "num_base_bdevs_operational": 1, 00:33:04.391 "base_bdevs_list": [ 00:33:04.391 { 00:33:04.391 "name": null, 00:33:04.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.391 "is_configured": false, 00:33:04.391 "data_offset": 256, 00:33:04.391 "data_size": 7936 00:33:04.391 }, 00:33:04.391 { 00:33:04.391 "name": "pt2", 
00:33:04.391 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:04.391 "is_configured": true, 00:33:04.391 "data_offset": 256, 00:33:04.391 "data_size": 7936 00:33:04.391 } 00:33:04.391 ] 00:33:04.391 }' 00:33:04.391 14:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:04.391 14:25:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:04.956 14:25:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:05.214 [2024-07-15 14:25:51.020437] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:05.214 [2024-07-15 14:25:51.020793] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:05.214 [2024-07-15 14:25:51.021032] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.214 [2024-07-15 14:25:51.021228] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:05.214 [2024-07-15 14:25:51.021380] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:33:05.214 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.214 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:33:05.471 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:33:05.471 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:33:05.471 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:33:05.471 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:05.788 [2024-07-15 14:25:51.480468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:05.788 [2024-07-15 14:25:51.480953] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.788 [2024-07-15 14:25:51.481166] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:05.788 [2024-07-15 14:25:51.481347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.788 [2024-07-15 14:25:51.483296] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.788 [2024-07-15 14:25:51.483509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:05.788 [2024-07-15 14:25:51.483786] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:05.788 [2024-07-15 14:25:51.483971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:05.788 [2024-07-15 14:25:51.484201] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:05.788 [2024-07-15 14:25:51.484340] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:05.789 [2024-07-15 14:25:51.484508] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name 
raid_bdev1, state configuring 00:33:05.789 [2024-07-15 14:25:51.484697] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:05.789 [2024-07-15 14:25:51.484934] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:33:05.789 [2024-07-15 14:25:51.485071] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:05.789 [2024-07-15 14:25:51.485233] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:33:05.789 [2024-07-15 14:25:51.485456] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:33:05.789 [2024-07-15 14:25:51.485595] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:33:05.789 [2024-07-15 14:25:51.485837] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:05.789 pt1 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:05.789 "name": "raid_bdev1", 00:33:05.789 "uuid": "77241564-553c-4544-8180-44b443a150f5", 00:33:05.789 "strip_size_kb": 0, 00:33:05.789 "state": "online", 00:33:05.789 "raid_level": "raid1", 00:33:05.789 "superblock": true, 00:33:05.789 "num_base_bdevs": 2, 00:33:05.789 "num_base_bdevs_discovered": 1, 00:33:05.789 "num_base_bdevs_operational": 1, 00:33:05.789 "base_bdevs_list": [ 00:33:05.789 { 00:33:05.789 "name": null, 00:33:05.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.789 "is_configured": false, 00:33:05.789 "data_offset": 256, 00:33:05.789 "data_size": 7936 00:33:05.789 }, 00:33:05.789 { 00:33:05.789 "name": "pt2", 00:33:05.789 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:05.789 "is_configured": true, 00:33:05.789 "data_offset": 256, 00:33:05.789 "data_size": 7936 00:33:05.789 } 
00:33:05.789 ] 00:33:05.789 }' 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:05.789 14:25:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:06.738 14:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:06.738 14:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:06.738 14:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:33:06.738 14:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:06.738 14:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:33:06.995 [2024-07-15 14:25:52.897472] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 77241564-553c-4544-8180-44b443a150f5 '!=' 77241564-553c-4544-8180-44b443a150f5 ']' 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 218328 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 218328 ']' 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 218328 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 218328 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 218328' 00:33:06.995 killing process with pid 218328 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 218328 00:33:06.995 [2024-07-15 14:25:52.936112] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:06.995 14:25:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 218328 00:33:06.995 [2024-07-15 14:25:52.936397] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:06.995 [2024-07-15 14:25:52.936556] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:06.995 [2024-07-15 14:25:52.936656] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:33:07.254 [2024-07-15 14:25:53.106761] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:08.633 14:25:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:33:08.633 00:33:08.633 real 0m17.678s 00:33:08.633 user 0m32.084s 00:33:08.633 sys 0m2.009s 00:33:08.633 
14:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.633 14:25:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:08.633 ************************************ 00:33:08.633 END TEST raid_superblock_test_md_separate 00:33:08.633 ************************************ 00:33:08.633 14:25:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:08.633 14:25:54 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:33:08.633 14:25:54 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:33:08.633 14:25:54 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:33:08.633 14:25:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.633 14:25:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:08.633 ************************************ 00:33:08.633 START TEST raid_rebuild_test_sb_md_separate 00:33:08.633 ************************************ 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false true 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:08.633 
14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=218859 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 218859 /var/tmp/spdk-raid.sock 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 218859 ']' 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:08.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.633 14:25:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:08.633 [2024-07-15 14:25:54.341582] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:33:08.633 [2024-07-15 14:25:54.341960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid218859 ] 00:33:08.633 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:08.633 Zero copy mechanism will not be used. 
00:33:08.633 [2024-07-15 14:25:54.500743] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.891 [2024-07-15 14:25:54.768919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.149 [2024-07-15 14:25:54.970043] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:09.407 14:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:09.407 14:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:33:09.407 14:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:09.407 14:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:33:09.665 BaseBdev1_malloc 00:33:09.665 14:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:09.924 [2024-07-15 14:25:55.917241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:09.924 [2024-07-15 14:25:55.917858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.924 [2024-07-15 14:25:55.918117] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:09.924 [2024-07-15 14:25:55.918327] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.924 [2024-07-15 14:25:55.920119] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.924 [2024-07-15 14:25:55.920365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:09.924 BaseBdev1 00:33:10.183 14:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:10.183 14:25:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:33:10.441 BaseBdev2_malloc 00:33:10.441 14:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:10.699 [2024-07-15 14:25:56.484801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:10.699 [2024-07-15 14:25:56.485279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:10.699 [2024-07-15 14:25:56.485540] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:10.699 [2024-07-15 14:25:56.485799] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:10.699 [2024-07-15 14:25:56.487766] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:10.699 [2024-07-15 14:25:56.488014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:10.699 BaseBdev2 00:33:10.699 14:25:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:33:10.957 spare_malloc 00:33:10.957 14:25:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:11.215 spare_delay 00:33:11.215 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:11.473 [2024-07-15 14:25:57.280316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:11.473 [2024-07-15 14:25:57.280944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:11.473 [2024-07-15 14:25:57.281199] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:11.473 [2024-07-15 14:25:57.281410] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:11.473 [2024-07-15 14:25:57.283162] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:11.473 [2024-07-15 14:25:57.283408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:11.473 spare 00:33:11.473 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:33:11.733 [2024-07-15 14:25:57.560394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:11.733 [2024-07-15 14:25:57.562129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:11.733 [2024-07-15 14:25:57.562471] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:33:11.733 [2024-07-15 14:25:57.562600] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:11.733 [2024-07-15 14:25:57.562775] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:33:11.733 [2024-07-15 14:25:57.562975] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:33:11.733 [2024-07-15 14:25:57.563087] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:33:11.733 [2024-07-15 14:25:57.563290] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.733 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.991 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:11.991 "name": "raid_bdev1", 00:33:11.991 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:11.991 "strip_size_kb": 0, 00:33:11.991 "state": "online", 00:33:11.991 "raid_level": "raid1", 00:33:11.991 "superblock": true, 00:33:11.991 "num_base_bdevs": 2, 00:33:11.991 "num_base_bdevs_discovered": 2, 00:33:11.991 "num_base_bdevs_operational": 2, 00:33:11.991 "base_bdevs_list": [ 00:33:11.991 { 00:33:11.991 "name": "BaseBdev1", 00:33:11.991 "uuid": "b02a5ad7-6784-5469-a922-2c58598f0835", 00:33:11.991 "is_configured": true, 00:33:11.991 "data_offset": 256, 00:33:11.991 "data_size": 7936 00:33:11.991 }, 00:33:11.991 { 00:33:11.991 "name": "BaseBdev2", 00:33:11.991 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:11.991 "is_configured": true, 00:33:11.991 "data_offset": 256, 00:33:11.991 "data_size": 7936 00:33:11.991 } 00:33:11.991 ] 00:33:11.991 }' 00:33:11.991 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:11.991 14:25:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:12.559 14:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:12.559 14:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:12.818 [2024-07-15 14:25:58.799325] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:12.818 14:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:33:13.076 14:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.076 14:25:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:13.335 14:25:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:13.335 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:13.593 [2024-07-15 14:25:59.339659] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:33:13.593 /dev/nbd0 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:13.593 1+0 records in 00:33:13.593 1+0 records out 00:33:13.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349453 s, 11.7 MB/s 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:13.593 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:33:13.594 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:33:13.594 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:33:14.160 7936+0 records in 00:33:14.160 7936+0 records out 00:33:14.160 32505856 bytes (33 MB, 31 MiB) copied, 0.587004 s, 55.4 MB/s 00:33:14.160 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:14.160 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:14.160 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:14.160 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:14.160 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:33:14.160 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:14.160 14:25:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:14.418 [2024-07-15 14:26:00.226533] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:14.418 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:14.675 [2024-07-15 14:26:00.486049] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:14.675 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:14.675 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:14.675 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:14.675 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:14.675 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:14.675 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:14.675 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:14.676 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:14.676 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:33:14.676 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:14.676 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.676 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:14.934 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:14.934 "name": "raid_bdev1", 00:33:14.934 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:14.934 "strip_size_kb": 0, 00:33:14.934 "state": "online", 00:33:14.934 "raid_level": "raid1", 00:33:14.934 "superblock": true, 00:33:14.934 "num_base_bdevs": 2, 00:33:14.934 "num_base_bdevs_discovered": 1, 00:33:14.934 "num_base_bdevs_operational": 1, 00:33:14.934 "base_bdevs_list": [ 00:33:14.934 { 00:33:14.934 "name": null, 00:33:14.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:14.934 "is_configured": false, 00:33:14.934 "data_offset": 256, 00:33:14.934 "data_size": 7936 00:33:14.934 }, 00:33:14.934 { 00:33:14.934 "name": "BaseBdev2", 00:33:14.934 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:14.934 "is_configured": true, 00:33:14.934 "data_offset": 256, 00:33:14.934 "data_size": 7936 00:33:14.934 } 00:33:14.934 ] 00:33:14.934 }' 00:33:14.934 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:14.934 14:26:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:15.501 14:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:15.759 [2024-07-15 14:26:01.738249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:15.759 [2024-07-15 14:26:01.751826] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d190 00:33:15.759 [2024-07-15 14:26:01.753520] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:16.017 14:26:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:16.951 14:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:16.951 14:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:16.951 14:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:16.951 14:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:16.951 14:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:16.951 14:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.951 14:26:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.210 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:17.210 "name": "raid_bdev1", 00:33:17.210 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:17.210 "strip_size_kb": 0, 
00:33:17.210 "state": "online", 00:33:17.210 "raid_level": "raid1", 00:33:17.210 "superblock": true, 00:33:17.210 "num_base_bdevs": 2, 00:33:17.210 "num_base_bdevs_discovered": 2, 00:33:17.210 "num_base_bdevs_operational": 2, 00:33:17.210 "process": { 00:33:17.210 "type": "rebuild", 00:33:17.210 "target": "spare", 00:33:17.210 "progress": { 00:33:17.210 "blocks": 3072, 00:33:17.210 "percent": 38 00:33:17.210 } 00:33:17.210 }, 00:33:17.210 "base_bdevs_list": [ 00:33:17.210 { 00:33:17.210 "name": "spare", 00:33:17.210 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:17.210 "is_configured": true, 00:33:17.210 "data_offset": 256, 00:33:17.210 "data_size": 7936 00:33:17.210 }, 00:33:17.210 { 00:33:17.210 "name": "BaseBdev2", 00:33:17.210 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:17.210 "is_configured": true, 00:33:17.210 "data_offset": 256, 00:33:17.210 "data_size": 7936 00:33:17.210 } 00:33:17.210 ] 00:33:17.210 }' 00:33:17.210 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:17.210 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:17.210 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:17.210 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:17.210 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:17.500 [2024-07-15 14:26:03.412007] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:17.500 [2024-07-15 14:26:03.463895] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:17.500 [2024-07-15 14:26:03.464464] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:17.500 [2024-07-15 14:26:03.464642] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:17.500 [2024-07-15 14:26:03.464692] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:17.765 14:26:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.765 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.022 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:18.022 "name": "raid_bdev1", 00:33:18.022 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:18.022 "strip_size_kb": 0, 00:33:18.022 "state": "online", 00:33:18.022 "raid_level": "raid1", 00:33:18.022 "superblock": true, 00:33:18.022 "num_base_bdevs": 2, 00:33:18.022 "num_base_bdevs_discovered": 1, 00:33:18.022 "num_base_bdevs_operational": 1, 00:33:18.022 "base_bdevs_list": [ 00:33:18.022 { 00:33:18.022 "name": null, 00:33:18.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.022 "is_configured": false, 00:33:18.022 "data_offset": 256, 00:33:18.022 "data_size": 7936 00:33:18.022 }, 00:33:18.022 { 00:33:18.022 "name": "BaseBdev2", 00:33:18.022 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:18.022 "is_configured": true, 00:33:18.022 "data_offset": 256, 00:33:18.022 "data_size": 7936 00:33:18.022 } 00:33:18.022 ] 00:33:18.022 }' 00:33:18.022 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:18.022 14:26:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:18.589 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:18.589 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:18.589 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:18.589 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:18.589 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:18.589 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.589 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.848 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:18.848 "name": "raid_bdev1", 00:33:18.848 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:18.848 "strip_size_kb": 0, 00:33:18.848 "state": "online", 00:33:18.848 "raid_level": "raid1", 00:33:18.848 "superblock": true, 00:33:18.848 "num_base_bdevs": 2, 00:33:18.848 "num_base_bdevs_discovered": 1, 00:33:18.848 "num_base_bdevs_operational": 1, 00:33:18.848 "base_bdevs_list": [ 00:33:18.848 { 00:33:18.848 "name": null, 00:33:18.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.848 "is_configured": false, 00:33:18.848 "data_offset": 256, 00:33:18.848 "data_size": 7936 00:33:18.848 }, 00:33:18.848 { 00:33:18.848 "name": "BaseBdev2", 00:33:18.848 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:18.848 "is_configured": true, 00:33:18.848 "data_offset": 256, 00:33:18.848 "data_size": 7936 00:33:18.848 } 00:33:18.848 ] 00:33:18.848 }' 00:33:18.848 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
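The state checks running here all follow the same pattern: pull the raid bdev JSON over the RPC socket, select raid_bdev1, and read the rebuild process fields with jq. A minimal sketch of that pattern, using the same socket, RPC and bdev name as the trace (illustrative only):

    # illustrative; rpc.py path, socket and bdev name as in the trace above
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
              bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    echo "$info" | jq -r '.process.type // "none"'    # "rebuild" while a rebuild is in progress
    echo "$info" | jq -r '.process.target // "none"'  # "spare" when rebuilding onto the spare bdev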
00:33:18.848 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:18.848 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:18.848 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:18.848 14:26:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:19.106 [2024-07-15 14:26:05.075145] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:19.106 [2024-07-15 14:26:05.087961] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:33:19.106 [2024-07-15 14:26:05.089628] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:19.106 14:26:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:20.482 "name": "raid_bdev1", 00:33:20.482 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:20.482 "strip_size_kb": 0, 00:33:20.482 "state": "online", 00:33:20.482 "raid_level": "raid1", 00:33:20.482 "superblock": true, 00:33:20.482 "num_base_bdevs": 2, 00:33:20.482 "num_base_bdevs_discovered": 2, 00:33:20.482 "num_base_bdevs_operational": 2, 00:33:20.482 "process": { 00:33:20.482 "type": "rebuild", 00:33:20.482 "target": "spare", 00:33:20.482 "progress": { 00:33:20.482 "blocks": 3072, 00:33:20.482 "percent": 38 00:33:20.482 } 00:33:20.482 }, 00:33:20.482 "base_bdevs_list": [ 00:33:20.482 { 00:33:20.482 "name": "spare", 00:33:20.482 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:20.482 "is_configured": true, 00:33:20.482 "data_offset": 256, 00:33:20.482 "data_size": 7936 00:33:20.482 }, 00:33:20.482 { 00:33:20.482 "name": "BaseBdev2", 00:33:20.482 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:20.482 "is_configured": true, 00:33:20.482 "data_offset": 256, 00:33:20.482 "data_size": 7936 00:33:20.482 } 00:33:20.482 ] 00:33:20.482 }' 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.482 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.target // "none"' 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:33:20.741 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1223 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:20.741 "name": "raid_bdev1", 00:33:20.741 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:20.741 "strip_size_kb": 0, 00:33:20.741 "state": "online", 00:33:20.741 "raid_level": "raid1", 00:33:20.741 "superblock": true, 00:33:20.741 "num_base_bdevs": 2, 00:33:20.741 "num_base_bdevs_discovered": 2, 00:33:20.741 "num_base_bdevs_operational": 2, 00:33:20.741 "process": { 00:33:20.741 "type": "rebuild", 00:33:20.741 "target": "spare", 00:33:20.741 "progress": { 00:33:20.741 "blocks": 4096, 00:33:20.741 "percent": 51 00:33:20.741 } 00:33:20.741 }, 00:33:20.741 "base_bdevs_list": [ 00:33:20.741 { 00:33:20.741 "name": "spare", 00:33:20.741 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:20.741 "is_configured": true, 00:33:20.741 "data_offset": 256, 00:33:20.741 "data_size": 7936 00:33:20.741 }, 00:33:20.741 { 00:33:20.741 "name": "BaseBdev2", 00:33:20.741 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:20.741 "is_configured": true, 00:33:20.741 "data_offset": 256, 00:33:20.741 "data_size": 7936 00:33:20.741 } 00:33:20.741 ] 00:33:20.741 }' 00:33:20.741 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:21.000 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.000 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 
-- # jq -r '.process.target // "none"' 00:33:21.000 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.000 14:26:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:21.980 14:26:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.237 14:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:22.237 "name": "raid_bdev1", 00:33:22.237 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:22.237 "strip_size_kb": 0, 00:33:22.237 "state": "online", 00:33:22.237 "raid_level": "raid1", 00:33:22.237 "superblock": true, 00:33:22.237 "num_base_bdevs": 2, 00:33:22.237 "num_base_bdevs_discovered": 2, 00:33:22.237 "num_base_bdevs_operational": 2, 00:33:22.237 "process": { 00:33:22.237 "type": "rebuild", 00:33:22.237 "target": "spare", 00:33:22.237 "progress": { 00:33:22.237 "blocks": 7424, 00:33:22.237 "percent": 93 00:33:22.237 } 00:33:22.237 }, 00:33:22.237 "base_bdevs_list": [ 00:33:22.237 { 00:33:22.237 "name": "spare", 00:33:22.237 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:22.237 "is_configured": true, 00:33:22.237 "data_offset": 256, 00:33:22.237 "data_size": 7936 00:33:22.237 }, 00:33:22.237 { 00:33:22.237 "name": "BaseBdev2", 00:33:22.237 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:22.237 "is_configured": true, 00:33:22.237 "data_offset": 256, 00:33:22.237 "data_size": 7936 00:33:22.237 } 00:33:22.237 ] 00:33:22.237 }' 00:33:22.237 14:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:22.237 14:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:22.237 14:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:22.237 [2024-07-15 14:26:08.208481] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:22.237 [2024-07-15 14:26:08.208677] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:22.237 [2024-07-15 14:26:08.208940] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:22.237 14:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:22.237 14:26:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:33:23.611 14:26:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:23.611 "name": "raid_bdev1", 00:33:23.611 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:23.611 "strip_size_kb": 0, 00:33:23.611 "state": "online", 00:33:23.611 "raid_level": "raid1", 00:33:23.611 "superblock": true, 00:33:23.611 "num_base_bdevs": 2, 00:33:23.611 "num_base_bdevs_discovered": 2, 00:33:23.611 "num_base_bdevs_operational": 2, 00:33:23.611 "base_bdevs_list": [ 00:33:23.611 { 00:33:23.611 "name": "spare", 00:33:23.611 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:23.611 "is_configured": true, 00:33:23.611 "data_offset": 256, 00:33:23.611 "data_size": 7936 00:33:23.611 }, 00:33:23.611 { 00:33:23.611 "name": "BaseBdev2", 00:33:23.611 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:23.611 "is_configured": true, 00:33:23.611 "data_offset": 256, 00:33:23.611 "data_size": 7936 00:33:23.611 } 00:33:23.611 ] 00:33:23.611 }' 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:23.611 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:23.869 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:24.127 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:24.127 "name": "raid_bdev1", 00:33:24.127 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:24.127 "strip_size_kb": 0, 00:33:24.127 "state": "online", 00:33:24.127 "raid_level": "raid1", 00:33:24.127 "superblock": true, 00:33:24.127 "num_base_bdevs": 2, 00:33:24.127 "num_base_bdevs_discovered": 2, 00:33:24.127 "num_base_bdevs_operational": 2, 00:33:24.127 "base_bdevs_list": [ 00:33:24.127 { 00:33:24.127 "name": "spare", 00:33:24.127 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:24.127 "is_configured": true, 00:33:24.127 "data_offset": 256, 00:33:24.127 "data_size": 7936 00:33:24.127 }, 00:33:24.127 { 00:33:24.127 "name": "BaseBdev2", 00:33:24.127 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:24.127 "is_configured": true, 00:33:24.127 "data_offset": 256, 00:33:24.127 "data_size": 7936 00:33:24.127 } 00:33:24.127 ] 00:33:24.127 }' 00:33:24.127 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:24.127 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:24.127 14:26:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:24.127 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:24.384 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:24.384 "name": "raid_bdev1", 00:33:24.384 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:24.384 "strip_size_kb": 0, 00:33:24.384 "state": "online", 00:33:24.384 "raid_level": "raid1", 00:33:24.384 "superblock": true, 00:33:24.384 "num_base_bdevs": 2, 00:33:24.384 "num_base_bdevs_discovered": 2, 00:33:24.384 "num_base_bdevs_operational": 2, 00:33:24.384 "base_bdevs_list": 
[ 00:33:24.384 { 00:33:24.384 "name": "spare", 00:33:24.384 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:24.384 "is_configured": true, 00:33:24.384 "data_offset": 256, 00:33:24.384 "data_size": 7936 00:33:24.384 }, 00:33:24.384 { 00:33:24.384 "name": "BaseBdev2", 00:33:24.384 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:24.384 "is_configured": true, 00:33:24.385 "data_offset": 256, 00:33:24.385 "data_size": 7936 00:33:24.385 } 00:33:24.385 ] 00:33:24.385 }' 00:33:24.385 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:24.385 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:24.952 14:26:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:25.210 [2024-07-15 14:26:11.173198] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:25.210 [2024-07-15 14:26:11.173423] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:25.210 [2024-07-15 14:26:11.173704] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:25.210 [2024-07-15 14:26:11.173902] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:25.210 [2024-07-15 14:26:11.174014] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:33:25.210 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:25.210 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:25.777 /dev/nbd0 00:33:25.777 14:26:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.777 1+0 records in 00:33:25.777 1+0 records out 00:33:25.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390491 s, 10.5 MB/s 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:25.777 14:26:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:33:26.345 /dev/nbd1 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # local i 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # break 00:33:26.345 14:26:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:26.345 1+0 records in 00:33:26.345 1+0 records out 00:33:26.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457134 s, 9.0 MB/s 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # size=4096 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # return 0 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.345 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.604 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.604 14:26:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:33:26.862 14:26:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:27.121 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:27.380 [2024-07-15 14:26:13.336293] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:27.380 [2024-07-15 14:26:13.336608] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.380 [2024-07-15 14:26:13.336711] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:27.380 [2024-07-15 14:26:13.336955] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.380 [2024-07-15 14:26:13.338577] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.380 [2024-07-15 14:26:13.338754] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:27.380 [2024-07-15 14:26:13.338956] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:27.380 [2024-07-15 14:26:13.339117] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:27.380 [2024-07-15 14:26:13.339347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:27.380 spare 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- 
# local raid_bdev_info 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.380 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.639 [2024-07-15 14:26:13.439522] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:33:27.639 [2024-07-15 14:26:13.439781] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:33:27.639 [2024-07-15 14:26:13.439962] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1dc0 00:33:27.639 [2024-07-15 14:26:13.440253] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:33:27.639 [2024-07-15 14:26:13.440356] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:33:27.639 [2024-07-15 14:26:13.440565] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:27.896 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:27.896 "name": "raid_bdev1", 00:33:27.896 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:27.896 "strip_size_kb": 0, 00:33:27.896 "state": "online", 00:33:27.896 "raid_level": "raid1", 00:33:27.896 "superblock": true, 00:33:27.896 "num_base_bdevs": 2, 00:33:27.896 "num_base_bdevs_discovered": 2, 00:33:27.896 "num_base_bdevs_operational": 2, 00:33:27.896 "base_bdevs_list": [ 00:33:27.896 { 00:33:27.896 "name": "spare", 00:33:27.896 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:27.896 "is_configured": true, 00:33:27.896 "data_offset": 256, 00:33:27.896 "data_size": 7936 00:33:27.896 }, 00:33:27.896 { 00:33:27.896 "name": "BaseBdev2", 00:33:27.896 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:27.896 "is_configured": true, 00:33:27.896 "data_offset": 256, 00:33:27.896 "data_size": 7936 00:33:27.896 } 00:33:27.896 ] 00:33:27.896 }' 00:33:27.896 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:27.896 14:26:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:28.488 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:28.488 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:28.488 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:28.488 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:28.488 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:28.488 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.488 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:28.746 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:28.746 "name": "raid_bdev1", 00:33:28.746 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:28.746 "strip_size_kb": 0, 00:33:28.746 "state": "online", 00:33:28.746 "raid_level": "raid1", 00:33:28.746 "superblock": true, 00:33:28.746 "num_base_bdevs": 2, 00:33:28.746 "num_base_bdevs_discovered": 2, 00:33:28.746 "num_base_bdevs_operational": 2, 00:33:28.746 "base_bdevs_list": [ 00:33:28.746 { 00:33:28.746 "name": "spare", 00:33:28.746 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:28.746 "is_configured": true, 00:33:28.746 "data_offset": 256, 00:33:28.746 "data_size": 7936 00:33:28.746 }, 00:33:28.746 { 00:33:28.746 "name": "BaseBdev2", 00:33:28.746 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:28.746 "is_configured": true, 00:33:28.746 "data_offset": 256, 00:33:28.746 "data_size": 7936 00:33:28.746 } 00:33:28.746 ] 00:33:28.746 }' 00:33:28.746 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:28.746 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:28.746 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:28.746 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:28.746 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:28.746 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:29.003 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:33:29.003 14:26:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:29.260 [2024-07-15 14:26:15.165051] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:29.260 14:26:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:29.260 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.519 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:29.519 "name": "raid_bdev1", 00:33:29.519 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:29.519 "strip_size_kb": 0, 00:33:29.519 "state": "online", 00:33:29.519 "raid_level": "raid1", 00:33:29.519 "superblock": true, 00:33:29.519 "num_base_bdevs": 2, 00:33:29.519 "num_base_bdevs_discovered": 1, 00:33:29.519 "num_base_bdevs_operational": 1, 00:33:29.519 "base_bdevs_list": [ 00:33:29.519 { 00:33:29.519 "name": null, 00:33:29.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:29.519 "is_configured": false, 00:33:29.519 "data_offset": 256, 00:33:29.519 "data_size": 7936 00:33:29.519 }, 00:33:29.519 { 00:33:29.519 "name": "BaseBdev2", 00:33:29.519 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:29.519 "is_configured": true, 00:33:29.519 "data_offset": 256, 00:33:29.519 "data_size": 7936 00:33:29.519 } 00:33:29.519 ] 00:33:29.519 }' 00:33:29.519 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:29.519 14:26:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:30.087 14:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:30.347 [2024-07-15 14:26:16.297264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:30.347 [2024-07-15 14:26:16.297624] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:30.347 [2024-07-15 14:26:16.297771] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
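The trace above re-adds the previously removed "spare" base bdev and then waits for the background rebuild that examine kicks off. A minimal shell sketch of that trigger-and-poll sequence, assembled only from the rpc.py calls and jq filters already visible in this trace (the RPC shorthand variable is illustrative and not part of the test scripts):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# re-attach the base bdev; examine finds its raid superblock and starts a rebuild on raid_bdev1
$RPC bdev_raid_add_base_bdev raid_bdev1 spare
# poll the raid bdev: process.type reads "rebuild" with target "spare" while the rebuild runs,
# and "none" once it finishes or the target bdev is removed
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.type // "none"'
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .process.target // "none"'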
00:33:30.347 [2024-07-15 14:26:16.297872] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:30.347 [2024-07-15 14:26:16.310540] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:33:30.347 [2024-07-15 14:26:16.312083] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:30.347 14:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:31.720 "name": "raid_bdev1", 00:33:31.720 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:31.720 "strip_size_kb": 0, 00:33:31.720 "state": "online", 00:33:31.720 "raid_level": "raid1", 00:33:31.720 "superblock": true, 00:33:31.720 "num_base_bdevs": 2, 00:33:31.720 "num_base_bdevs_discovered": 2, 00:33:31.720 "num_base_bdevs_operational": 2, 00:33:31.720 "process": { 00:33:31.720 "type": "rebuild", 00:33:31.720 "target": "spare", 00:33:31.720 "progress": { 00:33:31.720 "blocks": 3072, 00:33:31.720 "percent": 38 00:33:31.720 } 00:33:31.720 }, 00:33:31.720 "base_bdevs_list": [ 00:33:31.720 { 00:33:31.720 "name": "spare", 00:33:31.720 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:31.720 "is_configured": true, 00:33:31.720 "data_offset": 256, 00:33:31.720 "data_size": 7936 00:33:31.720 }, 00:33:31.720 { 00:33:31.720 "name": "BaseBdev2", 00:33:31.720 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:31.720 "is_configured": true, 00:33:31.720 "data_offset": 256, 00:33:31.720 "data_size": 7936 00:33:31.720 } 00:33:31.720 ] 00:33:31.720 }' 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:31.720 14:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:32.027 [2024-07-15 14:26:17.974654] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:32.286 [2024-07-15 14:26:18.021710] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:33:32.286 [2024-07-15 14:26:18.021972] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:32.286 [2024-07-15 14:26:18.022101] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:32.286 [2024-07-15 14:26:18.022155] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:32.286 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.558 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:32.558 "name": "raid_bdev1", 00:33:32.558 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:32.558 "strip_size_kb": 0, 00:33:32.558 "state": "online", 00:33:32.558 "raid_level": "raid1", 00:33:32.558 "superblock": true, 00:33:32.558 "num_base_bdevs": 2, 00:33:32.558 "num_base_bdevs_discovered": 1, 00:33:32.558 "num_base_bdevs_operational": 1, 00:33:32.558 "base_bdevs_list": [ 00:33:32.558 { 00:33:32.558 "name": null, 00:33:32.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.558 "is_configured": false, 00:33:32.558 "data_offset": 256, 00:33:32.558 "data_size": 7936 00:33:32.558 }, 00:33:32.558 { 00:33:32.558 "name": "BaseBdev2", 00:33:32.558 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:32.558 "is_configured": true, 00:33:32.558 "data_offset": 256, 00:33:32.558 "data_size": 7936 00:33:32.558 } 00:33:32.558 ] 00:33:32.558 }' 00:33:32.558 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:32.558 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:33.123 14:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:33.381 [2024-07-15 14:26:19.204826] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:33.381 [2024-07-15 14:26:19.204929] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.381 [2024-07-15 14:26:19.204968] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:33:33.381 [2024-07-15 14:26:19.204999] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.381 [2024-07-15 14:26:19.205248] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.381 [2024-07-15 14:26:19.205293] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:33.381 [2024-07-15 14:26:19.205380] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:33.381 [2024-07-15 14:26:19.205394] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:33:33.381 [2024-07-15 14:26:19.205403] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:33.381 [2024-07-15 14:26:19.205445] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:33.381 [2024-07-15 14:26:19.217366] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c22a0 00:33:33.381 spare 00:33:33.381 [2024-07-15 14:26:19.218834] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:33.381 14:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:33:34.315 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:34.315 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:34.315 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:34.315 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:34.315 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:34.315 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.315 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.573 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:34.573 "name": "raid_bdev1", 00:33:34.573 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:34.573 "strip_size_kb": 0, 00:33:34.573 "state": "online", 00:33:34.573 "raid_level": "raid1", 00:33:34.573 "superblock": true, 00:33:34.573 "num_base_bdevs": 2, 00:33:34.573 "num_base_bdevs_discovered": 2, 00:33:34.573 "num_base_bdevs_operational": 2, 00:33:34.573 "process": { 00:33:34.573 "type": "rebuild", 00:33:34.573 "target": "spare", 00:33:34.573 "progress": { 00:33:34.573 "blocks": 3072, 00:33:34.573 "percent": 38 00:33:34.573 } 00:33:34.573 }, 00:33:34.573 "base_bdevs_list": [ 00:33:34.573 { 00:33:34.573 "name": "spare", 00:33:34.573 "uuid": "22d66ef1-472a-5628-a2e5-b920868db93d", 00:33:34.573 "is_configured": true, 00:33:34.573 "data_offset": 256, 00:33:34.573 "data_size": 7936 00:33:34.573 }, 00:33:34.573 { 00:33:34.573 "name": "BaseBdev2", 00:33:34.573 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:34.573 "is_configured": true, 00:33:34.573 
"data_offset": 256, 00:33:34.573 "data_size": 7936 00:33:34.573 } 00:33:34.573 ] 00:33:34.573 }' 00:33:34.573 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:34.573 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:34.573 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:34.832 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:34.832 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:33:35.090 [2024-07-15 14:26:20.846147] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:35.090 [2024-07-15 14:26:20.928170] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:35.090 [2024-07-15 14:26:20.928292] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:35.090 [2024-07-15 14:26:20.928310] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:35.090 [2024-07-15 14:26:20.928319] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:35.090 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:35.091 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:35.091 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:35.091 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:35.091 14:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.349 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:35.349 "name": "raid_bdev1", 00:33:35.349 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:35.349 "strip_size_kb": 0, 00:33:35.349 "state": "online", 00:33:35.349 "raid_level": "raid1", 00:33:35.349 "superblock": true, 00:33:35.349 "num_base_bdevs": 2, 00:33:35.349 "num_base_bdevs_discovered": 1, 00:33:35.349 "num_base_bdevs_operational": 1, 00:33:35.349 "base_bdevs_list": [ 00:33:35.349 { 00:33:35.349 "name": null, 00:33:35.349 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:35.349 "is_configured": false, 00:33:35.349 "data_offset": 256, 00:33:35.349 "data_size": 7936 00:33:35.349 }, 00:33:35.349 { 00:33:35.349 "name": "BaseBdev2", 00:33:35.349 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:35.349 "is_configured": true, 00:33:35.349 "data_offset": 256, 00:33:35.349 "data_size": 7936 00:33:35.349 } 00:33:35.349 ] 00:33:35.349 }' 00:33:35.349 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:35.349 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:36.361 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:36.361 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:36.361 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:36.361 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:36.361 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:36.361 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.361 14:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.361 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:36.361 "name": "raid_bdev1", 00:33:36.361 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:36.361 "strip_size_kb": 0, 00:33:36.361 "state": "online", 00:33:36.361 "raid_level": "raid1", 00:33:36.361 "superblock": true, 00:33:36.361 "num_base_bdevs": 2, 00:33:36.361 "num_base_bdevs_discovered": 1, 00:33:36.361 "num_base_bdevs_operational": 1, 00:33:36.361 "base_bdevs_list": [ 00:33:36.361 { 00:33:36.361 "name": null, 00:33:36.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.361 "is_configured": false, 00:33:36.361 "data_offset": 256, 00:33:36.361 "data_size": 7936 00:33:36.361 }, 00:33:36.361 { 00:33:36.361 "name": "BaseBdev2", 00:33:36.361 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:36.361 "is_configured": true, 00:33:36.361 "data_offset": 256, 00:33:36.361 "data_size": 7936 00:33:36.361 } 00:33:36.361 ] 00:33:36.361 }' 00:33:36.361 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:36.361 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:36.361 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:36.361 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:36.361 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:33:36.619 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:36.902 [2024-07-15 14:26:22.876222] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:33:36.902 [2024-07-15 14:26:22.876349] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.902 [2024-07-15 14:26:22.876391] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:36.902 [2024-07-15 14:26:22.876416] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.902 [2024-07-15 14:26:22.876612] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.902 [2024-07-15 14:26:22.876661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:36.902 [2024-07-15 14:26:22.876782] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:36.902 [2024-07-15 14:26:22.876800] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:36.902 [2024-07-15 14:26:22.876808] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:36.902 BaseBdev1 00:33:36.902 14:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:38.275 14:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:38.275 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:38.275 "name": "raid_bdev1", 00:33:38.275 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:38.276 "strip_size_kb": 0, 00:33:38.276 "state": "online", 00:33:38.276 "raid_level": "raid1", 00:33:38.276 "superblock": true, 00:33:38.276 "num_base_bdevs": 2, 00:33:38.276 "num_base_bdevs_discovered": 1, 00:33:38.276 "num_base_bdevs_operational": 1, 00:33:38.276 "base_bdevs_list": [ 00:33:38.276 { 00:33:38.276 "name": null, 00:33:38.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:38.276 "is_configured": false, 00:33:38.276 "data_offset": 256, 00:33:38.276 "data_size": 7936 00:33:38.276 }, 00:33:38.276 { 00:33:38.276 "name": 
"BaseBdev2", 00:33:38.276 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:38.276 "is_configured": true, 00:33:38.276 "data_offset": 256, 00:33:38.276 "data_size": 7936 00:33:38.276 } 00:33:38.276 ] 00:33:38.276 }' 00:33:38.276 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:38.276 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:39.209 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:39.209 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:39.209 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:39.209 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:39.209 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:39.209 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:39.209 14:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:39.466 "name": "raid_bdev1", 00:33:39.466 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:39.466 "strip_size_kb": 0, 00:33:39.466 "state": "online", 00:33:39.466 "raid_level": "raid1", 00:33:39.466 "superblock": true, 00:33:39.466 "num_base_bdevs": 2, 00:33:39.466 "num_base_bdevs_discovered": 1, 00:33:39.466 "num_base_bdevs_operational": 1, 00:33:39.466 "base_bdevs_list": [ 00:33:39.466 { 00:33:39.466 "name": null, 00:33:39.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:39.466 "is_configured": false, 00:33:39.466 "data_offset": 256, 00:33:39.466 "data_size": 7936 00:33:39.466 }, 00:33:39.466 { 00:33:39.466 "name": "BaseBdev2", 00:33:39.466 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:39.466 "is_configured": true, 00:33:39.466 "data_offset": 256, 00:33:39.466 "data_size": 7936 00:33:39.466 } 00:33:39.466 ] 00:33:39.466 }' 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:39.466 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:39.724 [2024-07-15 14:26:25.657197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:39.724 [2024-07-15 14:26:25.657505] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:33:39.724 [2024-07-15 14:26:25.657635] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:39.724 request: 00:33:39.724 { 00:33:39.724 "base_bdev": "BaseBdev1", 00:33:39.724 "raid_bdev": "raid_bdev1", 00:33:39.724 "method": "bdev_raid_add_base_bdev", 00:33:39.724 "req_id": 1 00:33:39.724 } 00:33:39.724 Got JSON-RPC error response 00:33:39.724 response: 00:33:39.724 { 00:33:39.724 "code": -22, 00:33:39.724 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:39.724 } 00:33:39.724 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:33:39.724 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:39.724 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:39.724 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:39.724 14:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:41.098 "name": "raid_bdev1", 00:33:41.098 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:41.098 "strip_size_kb": 0, 00:33:41.098 "state": "online", 00:33:41.098 "raid_level": "raid1", 00:33:41.098 "superblock": true, 00:33:41.098 "num_base_bdevs": 2, 00:33:41.098 "num_base_bdevs_discovered": 1, 00:33:41.098 "num_base_bdevs_operational": 1, 00:33:41.098 "base_bdevs_list": [ 00:33:41.098 { 00:33:41.098 "name": null, 00:33:41.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:41.098 "is_configured": false, 00:33:41.098 "data_offset": 256, 00:33:41.098 "data_size": 7936 00:33:41.098 }, 00:33:41.098 { 00:33:41.098 "name": "BaseBdev2", 00:33:41.098 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:41.098 "is_configured": true, 00:33:41.098 "data_offset": 256, 00:33:41.098 "data_size": 7936 00:33:41.098 } 00:33:41.098 ] 00:33:41.098 }' 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:41.098 14:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:41.665 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:41.665 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:41.665 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:41.665 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:41.665 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:41.665 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.665 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.233 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:42.233 "name": "raid_bdev1", 00:33:42.233 "uuid": "77af1b69-789f-4b55-9a80-de568cf0282f", 00:33:42.233 "strip_size_kb": 0, 00:33:42.233 "state": "online", 00:33:42.233 "raid_level": "raid1", 00:33:42.233 "superblock": true, 00:33:42.233 "num_base_bdevs": 2, 00:33:42.233 "num_base_bdevs_discovered": 1, 00:33:42.233 "num_base_bdevs_operational": 1, 00:33:42.233 "base_bdevs_list": [ 00:33:42.233 { 00:33:42.233 "name": null, 00:33:42.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.233 "is_configured": false, 00:33:42.233 "data_offset": 256, 00:33:42.233 "data_size": 7936 
00:33:42.233 }, 00:33:42.233 { 00:33:42.233 "name": "BaseBdev2", 00:33:42.233 "uuid": "965ae439-a2e8-5949-b028-9855444c775b", 00:33:42.233 "is_configured": true, 00:33:42.233 "data_offset": 256, 00:33:42.233 "data_size": 7936 00:33:42.233 } 00:33:42.233 ] 00:33:42.233 }' 00:33:42.233 14:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 218859 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 218859 ']' 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 218859 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 218859 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 218859' 00:33:42.233 killing process with pid 218859 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 218859 00:33:42.233 Received shutdown signal, test time was about 60.000000 seconds 00:33:42.233 00:33:42.233 Latency(us) 00:33:42.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:42.233 =================================================================================================================== 00:33:42.233 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:42.233 14:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 218859 00:33:42.233 [2024-07-15 14:26:28.100287] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:42.233 [2024-07-15 14:26:28.100467] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:42.233 [2024-07-15 14:26:28.100614] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:42.233 [2024-07-15 14:26:28.100663] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:33:42.491 [2024-07-15 14:26:28.384190] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:43.866 14:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:33:43.866 00:33:43.866 real 0m35.291s 00:33:43.866 user 0m56.277s 00:33:43.866 sys 0m3.985s 00:33:43.866 14:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:43.866 14:26:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:33:43.866 ************************************ 00:33:43.866 END TEST raid_rebuild_test_sb_md_separate 00:33:43.866 ************************************ 00:33:43.866 14:26:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:43.866 14:26:29 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:33:43.866 14:26:29 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:33:43.866 14:26:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:33:43.866 14:26:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:43.866 14:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:43.866 ************************************ 00:33:43.866 START TEST raid_state_function_test_sb_md_interleaved 00:33:43.866 ************************************ 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:33:43.866 
14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=219746 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:33:43.866 Process raid pid: 219746 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 219746' 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 219746 /var/tmp/spdk-raid.sock 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 219746 ']' 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:43.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:43.866 14:26:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:43.866 [2024-07-15 14:26:29.704496] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
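At this point the interleaved-metadata state-function test is only bringing up bdev_svc and waiting on the RPC socket; the trace that follows then declares the raid1 set and its base bdevs over that socket. A condensed sketch of those steps, using the rpc.py invocations that appear further down (the RPC shorthand variable is illustrative):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# declare the raid1 set with a superblock before its base bdevs exist; it stays in "configuring"
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# create a 32 MiB malloc base bdev with 4096-byte blocks and 32 bytes of interleaved metadata
$RPC bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
# dump the raid bdev and inspect fields such as "state" and "num_base_bdevs_discovered"
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'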
00:33:43.866 [2024-07-15 14:26:29.704858] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:43.866 [2024-07-15 14:26:29.868409] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:44.433 [2024-07-15 14:26:30.177508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:44.433 [2024-07-15 14:26:30.404747] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:44.999 14:26:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:44.999 14:26:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:33:44.999 14:26:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:44.999 [2024-07-15 14:26:30.999875] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:44.999 [2024-07-15 14:26:31.000148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:44.999 [2024-07-15 14:26:31.000274] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:44.999 [2024-07-15 14:26:31.000347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:45.258 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.517 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:45.517 "name": "Existed_Raid", 00:33:45.517 "uuid": "78324a67-924a-4b1b-b936-fbb012adb8a0", 
00:33:45.517 "strip_size_kb": 0, 00:33:45.517 "state": "configuring", 00:33:45.517 "raid_level": "raid1", 00:33:45.517 "superblock": true, 00:33:45.517 "num_base_bdevs": 2, 00:33:45.517 "num_base_bdevs_discovered": 0, 00:33:45.517 "num_base_bdevs_operational": 2, 00:33:45.517 "base_bdevs_list": [ 00:33:45.517 { 00:33:45.517 "name": "BaseBdev1", 00:33:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.517 "is_configured": false, 00:33:45.517 "data_offset": 0, 00:33:45.517 "data_size": 0 00:33:45.517 }, 00:33:45.517 { 00:33:45.517 "name": "BaseBdev2", 00:33:45.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.517 "is_configured": false, 00:33:45.517 "data_offset": 0, 00:33:45.517 "data_size": 0 00:33:45.517 } 00:33:45.517 ] 00:33:45.517 }' 00:33:45.517 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:45.517 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:46.085 14:26:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:46.344 [2024-07-15 14:26:32.240034] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:46.344 [2024-07-15 14:26:32.240283] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:33:46.344 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:46.603 [2024-07-15 14:26:32.540157] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:46.603 [2024-07-15 14:26:32.540381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:46.603 [2024-07-15 14:26:32.540517] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:46.603 [2024-07-15 14:26:32.540589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:46.603 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:33:46.862 [2024-07-15 14:26:32.853708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:46.862 BaseBdev1 00:33:47.120 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:33:47.120 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:33:47.120 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:47.120 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:33:47.120 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:47.120 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:47.120 14:26:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:47.378 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:47.378 [ 00:33:47.378 { 00:33:47.378 "name": "BaseBdev1", 00:33:47.378 "aliases": [ 00:33:47.378 "e0277dcb-812e-469b-8e28-261d8368e21f" 00:33:47.378 ], 00:33:47.378 "product_name": "Malloc disk", 00:33:47.378 "block_size": 4128, 00:33:47.378 "num_blocks": 8192, 00:33:47.378 "uuid": "e0277dcb-812e-469b-8e28-261d8368e21f", 00:33:47.378 "md_size": 32, 00:33:47.378 "md_interleave": true, 00:33:47.378 "dif_type": 0, 00:33:47.378 "assigned_rate_limits": { 00:33:47.378 "rw_ios_per_sec": 0, 00:33:47.378 "rw_mbytes_per_sec": 0, 00:33:47.378 "r_mbytes_per_sec": 0, 00:33:47.378 "w_mbytes_per_sec": 0 00:33:47.378 }, 00:33:47.378 "claimed": true, 00:33:47.378 "claim_type": "exclusive_write", 00:33:47.378 "zoned": false, 00:33:47.378 "supported_io_types": { 00:33:47.378 "read": true, 00:33:47.378 "write": true, 00:33:47.378 "unmap": true, 00:33:47.378 "flush": true, 00:33:47.378 "reset": true, 00:33:47.378 "nvme_admin": false, 00:33:47.378 "nvme_io": false, 00:33:47.378 "nvme_io_md": false, 00:33:47.378 "write_zeroes": true, 00:33:47.378 "zcopy": true, 00:33:47.378 "get_zone_info": false, 00:33:47.378 "zone_management": false, 00:33:47.378 "zone_append": false, 00:33:47.378 "compare": false, 00:33:47.378 "compare_and_write": false, 00:33:47.378 "abort": true, 00:33:47.378 "seek_hole": false, 00:33:47.378 "seek_data": false, 00:33:47.378 "copy": true, 00:33:47.378 "nvme_iov_md": false 00:33:47.379 }, 00:33:47.379 "memory_domains": [ 00:33:47.379 { 00:33:47.379 "dma_device_id": "system", 00:33:47.379 "dma_device_type": 1 00:33:47.379 }, 00:33:47.379 { 00:33:47.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:47.379 "dma_device_type": 2 00:33:47.379 } 00:33:47.379 ], 00:33:47.379 "driver_specific": {} 00:33:47.379 } 00:33:47.379 ] 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:47.637 14:26:33 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:47.637 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.896 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:47.896 "name": "Existed_Raid", 00:33:47.896 "uuid": "b5690f96-f51e-4e90-b082-501be1ada5ce", 00:33:47.896 "strip_size_kb": 0, 00:33:47.896 "state": "configuring", 00:33:47.896 "raid_level": "raid1", 00:33:47.896 "superblock": true, 00:33:47.896 "num_base_bdevs": 2, 00:33:47.896 "num_base_bdevs_discovered": 1, 00:33:47.896 "num_base_bdevs_operational": 2, 00:33:47.896 "base_bdevs_list": [ 00:33:47.896 { 00:33:47.896 "name": "BaseBdev1", 00:33:47.896 "uuid": "e0277dcb-812e-469b-8e28-261d8368e21f", 00:33:47.896 "is_configured": true, 00:33:47.896 "data_offset": 256, 00:33:47.896 "data_size": 7936 00:33:47.896 }, 00:33:47.896 { 00:33:47.896 "name": "BaseBdev2", 00:33:47.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.896 "is_configured": false, 00:33:47.896 "data_offset": 0, 00:33:47.896 "data_size": 0 00:33:47.896 } 00:33:47.896 ] 00:33:47.896 }' 00:33:47.896 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:47.896 14:26:33 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:48.462 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:48.720 [2024-07-15 14:26:34.626216] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:48.720 [2024-07-15 14:26:34.626487] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:33:48.720 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:33:48.979 [2024-07-15 14:26:34.870316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:48.979 [2024-07-15 14:26:34.872031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:48.979 [2024-07-15 14:26:34.872214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:48.979 14:26:34 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:48.979 14:26:34 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:49.237 14:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:49.237 "name": "Existed_Raid", 00:33:49.237 "uuid": "161d810d-4e8f-42d5-a963-c431540aeb8e", 00:33:49.237 "strip_size_kb": 0, 00:33:49.237 "state": "configuring", 00:33:49.237 "raid_level": "raid1", 00:33:49.237 "superblock": true, 00:33:49.237 "num_base_bdevs": 2, 00:33:49.237 "num_base_bdevs_discovered": 1, 00:33:49.237 "num_base_bdevs_operational": 2, 00:33:49.237 "base_bdevs_list": [ 00:33:49.237 { 00:33:49.237 "name": "BaseBdev1", 00:33:49.237 "uuid": "e0277dcb-812e-469b-8e28-261d8368e21f", 00:33:49.237 "is_configured": true, 00:33:49.237 "data_offset": 256, 00:33:49.237 "data_size": 7936 00:33:49.237 }, 00:33:49.237 { 00:33:49.237 "name": "BaseBdev2", 00:33:49.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:49.237 "is_configured": false, 00:33:49.237 "data_offset": 0, 00:33:49.237 "data_size": 0 00:33:49.237 } 00:33:49.237 ] 00:33:49.237 }' 00:33:49.237 14:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:49.237 14:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:49.867 14:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:33:50.433 [2024-07-15 14:26:36.135312] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:50.433 [2024-07-15 14:26:36.135820] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:33:50.433 [2024-07-15 14:26:36.135953] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:50.433 [2024-07-15 14:26:36.136106] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:33:50.433 [2024-07-15 14:26:36.136279] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:33:50.433 [2024-07-15 14:26:36.136400] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:33:50.433 [2024-07-15 14:26:36.136566] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:50.433 BaseBdev2 00:33:50.433 14:26:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:33:50.433 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:33:50.433 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:33:50.433 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:33:50.433 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:33:50.433 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:33:50.433 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:50.691 [ 00:33:50.691 { 00:33:50.691 "name": "BaseBdev2", 00:33:50.691 "aliases": [ 00:33:50.691 "ff238746-6aa7-445f-9009-918d4b71725f" 00:33:50.691 ], 00:33:50.691 "product_name": "Malloc disk", 00:33:50.691 "block_size": 4128, 00:33:50.691 "num_blocks": 8192, 00:33:50.691 "uuid": "ff238746-6aa7-445f-9009-918d4b71725f", 00:33:50.691 "md_size": 32, 00:33:50.691 "md_interleave": true, 00:33:50.691 "dif_type": 0, 00:33:50.691 "assigned_rate_limits": { 00:33:50.691 "rw_ios_per_sec": 0, 00:33:50.691 "rw_mbytes_per_sec": 0, 00:33:50.691 "r_mbytes_per_sec": 0, 00:33:50.691 "w_mbytes_per_sec": 0 00:33:50.691 }, 00:33:50.691 "claimed": true, 00:33:50.691 "claim_type": "exclusive_write", 00:33:50.691 "zoned": false, 00:33:50.691 "supported_io_types": { 00:33:50.691 "read": true, 00:33:50.691 "write": true, 00:33:50.691 "unmap": true, 00:33:50.691 "flush": true, 00:33:50.691 "reset": true, 00:33:50.691 "nvme_admin": false, 00:33:50.691 "nvme_io": false, 00:33:50.691 "nvme_io_md": false, 00:33:50.691 "write_zeroes": true, 00:33:50.691 "zcopy": true, 00:33:50.691 "get_zone_info": false, 00:33:50.691 "zone_management": false, 00:33:50.691 "zone_append": false, 00:33:50.691 "compare": false, 00:33:50.691 "compare_and_write": false, 00:33:50.691 "abort": true, 00:33:50.691 "seek_hole": false, 00:33:50.691 "seek_data": false, 00:33:50.691 "copy": true, 00:33:50.691 "nvme_iov_md": false 00:33:50.691 }, 00:33:50.691 "memory_domains": [ 00:33:50.691 { 00:33:50.691 "dma_device_id": "system", 00:33:50.691 "dma_device_type": 1 00:33:50.691 }, 00:33:50.691 { 00:33:50.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:50.691 "dma_device_type": 2 00:33:50.691 } 00:33:50.691 ], 00:33:50.691 "driver_specific": {} 00:33:50.691 } 00:33:50.691 ] 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:50.691 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:50.949 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:50.949 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:50.949 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:50.949 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:50.949 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.206 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:51.206 "name": "Existed_Raid", 00:33:51.206 "uuid": "161d810d-4e8f-42d5-a963-c431540aeb8e", 00:33:51.206 "strip_size_kb": 0, 00:33:51.206 "state": "online", 00:33:51.206 "raid_level": "raid1", 00:33:51.206 "superblock": true, 00:33:51.206 "num_base_bdevs": 2, 00:33:51.206 "num_base_bdevs_discovered": 2, 00:33:51.206 "num_base_bdevs_operational": 2, 00:33:51.206 "base_bdevs_list": [ 00:33:51.206 { 00:33:51.206 "name": "BaseBdev1", 00:33:51.206 "uuid": "e0277dcb-812e-469b-8e28-261d8368e21f", 00:33:51.206 "is_configured": true, 00:33:51.206 "data_offset": 256, 00:33:51.206 "data_size": 7936 00:33:51.206 }, 00:33:51.206 { 00:33:51.206 "name": "BaseBdev2", 00:33:51.206 "uuid": "ff238746-6aa7-445f-9009-918d4b71725f", 00:33:51.206 "is_configured": true, 00:33:51.206 "data_offset": 256, 00:33:51.206 "data_size": 7936 00:33:51.206 } 00:33:51.206 ] 00:33:51.206 }' 00:33:51.206 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:51.206 14:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:51.773 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:52.060 [2024-07-15 14:26:37.961394] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:52.060 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:52.060 "name": "Existed_Raid", 00:33:52.060 "aliases": [ 00:33:52.060 "161d810d-4e8f-42d5-a963-c431540aeb8e" 00:33:52.060 ], 00:33:52.060 "product_name": "Raid Volume", 00:33:52.060 "block_size": 4128, 00:33:52.060 "num_blocks": 7936, 00:33:52.060 "uuid": "161d810d-4e8f-42d5-a963-c431540aeb8e", 00:33:52.060 "md_size": 32, 00:33:52.060 "md_interleave": true, 00:33:52.060 "dif_type": 0, 00:33:52.060 "assigned_rate_limits": { 00:33:52.060 "rw_ios_per_sec": 0, 00:33:52.060 "rw_mbytes_per_sec": 0, 00:33:52.060 "r_mbytes_per_sec": 0, 00:33:52.060 "w_mbytes_per_sec": 0 00:33:52.060 }, 00:33:52.060 "claimed": false, 00:33:52.060 "zoned": false, 00:33:52.060 "supported_io_types": { 00:33:52.060 "read": true, 00:33:52.060 "write": true, 00:33:52.060 "unmap": false, 00:33:52.060 "flush": false, 00:33:52.060 "reset": true, 00:33:52.060 "nvme_admin": false, 00:33:52.060 "nvme_io": false, 00:33:52.060 "nvme_io_md": false, 00:33:52.060 "write_zeroes": true, 00:33:52.060 "zcopy": false, 00:33:52.060 "get_zone_info": false, 00:33:52.060 "zone_management": false, 00:33:52.060 "zone_append": false, 00:33:52.060 "compare": false, 00:33:52.060 "compare_and_write": false, 00:33:52.060 "abort": false, 00:33:52.060 "seek_hole": false, 00:33:52.060 "seek_data": false, 00:33:52.060 "copy": false, 00:33:52.060 "nvme_iov_md": false 00:33:52.060 }, 00:33:52.060 "memory_domains": [ 00:33:52.060 { 00:33:52.060 "dma_device_id": "system", 00:33:52.060 "dma_device_type": 1 00:33:52.060 }, 00:33:52.060 { 00:33:52.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.060 "dma_device_type": 2 00:33:52.060 }, 00:33:52.060 { 00:33:52.060 "dma_device_id": "system", 00:33:52.060 "dma_device_type": 1 00:33:52.060 }, 00:33:52.060 { 00:33:52.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.061 "dma_device_type": 2 00:33:52.061 } 00:33:52.061 ], 00:33:52.061 "driver_specific": { 00:33:52.061 "raid": { 00:33:52.061 "uuid": "161d810d-4e8f-42d5-a963-c431540aeb8e", 00:33:52.061 "strip_size_kb": 0, 00:33:52.061 "state": "online", 00:33:52.061 "raid_level": "raid1", 00:33:52.061 "superblock": true, 00:33:52.061 "num_base_bdevs": 2, 00:33:52.061 "num_base_bdevs_discovered": 2, 00:33:52.061 "num_base_bdevs_operational": 2, 00:33:52.061 "base_bdevs_list": [ 00:33:52.061 { 00:33:52.061 "name": "BaseBdev1", 00:33:52.061 "uuid": "e0277dcb-812e-469b-8e28-261d8368e21f", 00:33:52.061 "is_configured": true, 00:33:52.061 "data_offset": 256, 00:33:52.061 "data_size": 7936 00:33:52.061 }, 00:33:52.061 { 00:33:52.061 "name": "BaseBdev2", 00:33:52.061 "uuid": "ff238746-6aa7-445f-9009-918d4b71725f", 00:33:52.061 "is_configured": true, 00:33:52.061 "data_offset": 256, 00:33:52.061 "data_size": 7936 00:33:52.061 } 00:33:52.061 ] 00:33:52.061 } 00:33:52.061 } 00:33:52.061 }' 00:33:52.061 14:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:52.061 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='BaseBdev1 00:33:52.061 BaseBdev2' 00:33:52.061 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:52.061 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:52.061 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:52.331 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:52.331 "name": "BaseBdev1", 00:33:52.331 "aliases": [ 00:33:52.331 "e0277dcb-812e-469b-8e28-261d8368e21f" 00:33:52.331 ], 00:33:52.331 "product_name": "Malloc disk", 00:33:52.331 "block_size": 4128, 00:33:52.331 "num_blocks": 8192, 00:33:52.331 "uuid": "e0277dcb-812e-469b-8e28-261d8368e21f", 00:33:52.331 "md_size": 32, 00:33:52.331 "md_interleave": true, 00:33:52.331 "dif_type": 0, 00:33:52.331 "assigned_rate_limits": { 00:33:52.331 "rw_ios_per_sec": 0, 00:33:52.331 "rw_mbytes_per_sec": 0, 00:33:52.331 "r_mbytes_per_sec": 0, 00:33:52.331 "w_mbytes_per_sec": 0 00:33:52.331 }, 00:33:52.331 "claimed": true, 00:33:52.331 "claim_type": "exclusive_write", 00:33:52.331 "zoned": false, 00:33:52.331 "supported_io_types": { 00:33:52.331 "read": true, 00:33:52.331 "write": true, 00:33:52.331 "unmap": true, 00:33:52.331 "flush": true, 00:33:52.331 "reset": true, 00:33:52.331 "nvme_admin": false, 00:33:52.331 "nvme_io": false, 00:33:52.331 "nvme_io_md": false, 00:33:52.331 "write_zeroes": true, 00:33:52.331 "zcopy": true, 00:33:52.331 "get_zone_info": false, 00:33:52.331 "zone_management": false, 00:33:52.331 "zone_append": false, 00:33:52.331 "compare": false, 00:33:52.331 "compare_and_write": false, 00:33:52.331 "abort": true, 00:33:52.331 "seek_hole": false, 00:33:52.331 "seek_data": false, 00:33:52.331 "copy": true, 00:33:52.331 "nvme_iov_md": false 00:33:52.331 }, 00:33:52.331 "memory_domains": [ 00:33:52.331 { 00:33:52.331 "dma_device_id": "system", 00:33:52.331 "dma_device_type": 1 00:33:52.331 }, 00:33:52.331 { 00:33:52.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.331 "dma_device_type": 2 00:33:52.331 } 00:33:52.331 ], 00:33:52.331 "driver_specific": {} 00:33:52.331 }' 00:33:52.331 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:52.590 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:52.590 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:52.590 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:52.590 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:52.590 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:52.590 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:52.590 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:52.848 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:52.848 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:52.848 
14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:52.848 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:52.848 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:52.848 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:52.848 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:53.107 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:53.107 "name": "BaseBdev2", 00:33:53.107 "aliases": [ 00:33:53.107 "ff238746-6aa7-445f-9009-918d4b71725f" 00:33:53.107 ], 00:33:53.107 "product_name": "Malloc disk", 00:33:53.107 "block_size": 4128, 00:33:53.107 "num_blocks": 8192, 00:33:53.107 "uuid": "ff238746-6aa7-445f-9009-918d4b71725f", 00:33:53.107 "md_size": 32, 00:33:53.107 "md_interleave": true, 00:33:53.107 "dif_type": 0, 00:33:53.107 "assigned_rate_limits": { 00:33:53.107 "rw_ios_per_sec": 0, 00:33:53.107 "rw_mbytes_per_sec": 0, 00:33:53.107 "r_mbytes_per_sec": 0, 00:33:53.107 "w_mbytes_per_sec": 0 00:33:53.107 }, 00:33:53.107 "claimed": true, 00:33:53.107 "claim_type": "exclusive_write", 00:33:53.107 "zoned": false, 00:33:53.107 "supported_io_types": { 00:33:53.107 "read": true, 00:33:53.107 "write": true, 00:33:53.107 "unmap": true, 00:33:53.107 "flush": true, 00:33:53.107 "reset": true, 00:33:53.107 "nvme_admin": false, 00:33:53.107 "nvme_io": false, 00:33:53.107 "nvme_io_md": false, 00:33:53.107 "write_zeroes": true, 00:33:53.107 "zcopy": true, 00:33:53.107 "get_zone_info": false, 00:33:53.107 "zone_management": false, 00:33:53.107 "zone_append": false, 00:33:53.107 "compare": false, 00:33:53.107 "compare_and_write": false, 00:33:53.107 "abort": true, 00:33:53.107 "seek_hole": false, 00:33:53.107 "seek_data": false, 00:33:53.107 "copy": true, 00:33:53.107 "nvme_iov_md": false 00:33:53.107 }, 00:33:53.107 "memory_domains": [ 00:33:53.107 { 00:33:53.107 "dma_device_id": "system", 00:33:53.107 "dma_device_type": 1 00:33:53.107 }, 00:33:53.107 { 00:33:53.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:53.107 "dma_device_type": 2 00:33:53.107 } 00:33:53.107 ], 00:33:53.107 "driver_specific": {} 00:33:53.107 }' 00:33:53.107 14:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:53.107 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:53.107 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:33:53.107 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:33:53.365 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:53.624 [2024-07-15 14:26:39.581485] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.882 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.140 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:54.140 "name": "Existed_Raid", 00:33:54.140 "uuid": "161d810d-4e8f-42d5-a963-c431540aeb8e", 00:33:54.140 "strip_size_kb": 0, 00:33:54.140 "state": "online", 00:33:54.140 "raid_level": "raid1", 00:33:54.140 "superblock": true, 00:33:54.140 "num_base_bdevs": 2, 00:33:54.140 "num_base_bdevs_discovered": 1, 00:33:54.140 "num_base_bdevs_operational": 1, 00:33:54.140 "base_bdevs_list": [ 00:33:54.140 { 00:33:54.140 "name": null, 
00:33:54.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.140 "is_configured": false, 00:33:54.140 "data_offset": 256, 00:33:54.140 "data_size": 7936 00:33:54.140 }, 00:33:54.140 { 00:33:54.140 "name": "BaseBdev2", 00:33:54.140 "uuid": "ff238746-6aa7-445f-9009-918d4b71725f", 00:33:54.140 "is_configured": true, 00:33:54.140 "data_offset": 256, 00:33:54.140 "data_size": 7936 00:33:54.140 } 00:33:54.140 ] 00:33:54.140 }' 00:33:54.140 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:54.140 14:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:54.706 14:26:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:54.706 14:26:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:54.706 14:26:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:54.706 14:26:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:54.964 14:26:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:54.964 14:26:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:54.964 14:26:40 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:55.252 [2024-07-15 14:26:41.115659] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:55.252 [2024-07-15 14:26:41.115946] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:55.252 [2024-07-15 14:26:41.201519] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:55.252 [2024-07-15 14:26:41.201759] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:55.252 [2024-07-15 14:26:41.201876] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:33:55.252 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:55.252 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:55.252 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.252 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 219746 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@948 -- # '[' -z 219746 ']' 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 219746 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 219746 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 219746' 00:33:55.511 killing process with pid 219746 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 219746 00:33:55.511 [2024-07-15 14:26:41.492189] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:55.511 14:26:41 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 219746 00:33:55.511 [2024-07-15 14:26:41.492528] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:56.907 14:26:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:33:56.907 00:33:56.907 real 0m12.931s 00:33:56.907 user 0m22.665s 00:33:56.907 sys 0m1.592s 00:33:56.907 14:26:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:56.907 14:26:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:56.907 ************************************ 00:33:56.907 END TEST raid_state_function_test_sb_md_interleaved 00:33:56.907 ************************************ 00:33:56.907 14:26:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:33:56.907 14:26:42 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:33:56.907 14:26:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:33:56.907 14:26:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:56.907 14:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:56.907 ************************************ 00:33:56.907 START TEST raid_superblock_test_md_interleaved 00:33:56.907 ************************************ 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:33:56.907 14:26:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=220128 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 220128 /var/tmp/spdk-raid.sock 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 220128 ']' 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:56.907 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:56.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:56.908 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:56.908 14:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:33:56.908 [2024-07-15 14:26:42.684212] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
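The superblock test starting here drives a fresh bdev_svc app entirely over JSON-RPC. A minimal sketch of that sequence, assembled only from rpc.py invocations that appear later in this same trace (the socket path, block geometry, UUIDs, and bdev names are taken from this run and are not general defaults):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MiB malloc bdevs with 4096-byte data blocks and 32 bytes of interleaved metadata
    # (the resulting bdevs report block_size 4128, num_blocks 8192, md_interleave true)
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc2
    # wrap each malloc bdev in a passthru bdev with a fixed UUID
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble a raid1 volume with an on-disk superblock (-s) and confirm it comes up online
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    $RPC bdev_raid_get_bdevs all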
00:33:56.908 [2024-07-15 14:26:42.684501] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220128 ] 00:33:56.908 [2024-07-15 14:26:42.834283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.166 [2024-07-15 14:26:43.048572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:57.424 [2024-07-15 14:26:43.241822] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:57.682 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:57.683 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:33:57.941 malloc1 00:33:57.941 14:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:58.199 [2024-07-15 14:26:44.175417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:58.199 [2024-07-15 14:26:44.175738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:58.199 [2024-07-15 14:26:44.175956] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:33:58.199 [2024-07-15 14:26:44.176112] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:58.199 [2024-07-15 14:26:44.177945] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:58.199 [2024-07-15 14:26:44.178149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:58.199 pt1 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:58.199 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:33:58.518 malloc2 00:33:58.519 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:58.789 [2024-07-15 14:26:44.734502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:58.789 [2024-07-15 14:26:44.734857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:58.789 [2024-07-15 14:26:44.734951] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:33:58.789 [2024-07-15 14:26:44.735175] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:58.789 [2024-07-15 14:26:44.736886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:58.789 [2024-07-15 14:26:44.737068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:58.789 pt2 00:33:58.789 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:58.789 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:58.789 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:33:59.048 [2024-07-15 14:26:44.978589] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:59.048 [2024-07-15 14:26:44.980326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:59.048 [2024-07-15 14:26:44.980638] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:33:59.048 [2024-07-15 14:26:44.980791] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:33:59.048 [2024-07-15 14:26:44.980983] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:33:59.048 [2024-07-15 14:26:44.981195] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:33:59.048 [2024-07-15 14:26:44.981320] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:33:59.048 [2024-07-15 14:26:44.981489] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:59.048 14:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:59.048 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:59.048 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:59.048 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:59.306 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:59.306 "name": "raid_bdev1", 00:33:59.306 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:33:59.306 "strip_size_kb": 0, 00:33:59.306 "state": "online", 00:33:59.306 "raid_level": "raid1", 00:33:59.306 "superblock": true, 00:33:59.306 "num_base_bdevs": 2, 00:33:59.306 "num_base_bdevs_discovered": 2, 00:33:59.306 "num_base_bdevs_operational": 2, 00:33:59.306 "base_bdevs_list": [ 00:33:59.306 { 00:33:59.306 "name": "pt1", 00:33:59.306 "uuid": "00000000-0000-0000-0000-000000000001", 00:33:59.306 "is_configured": true, 00:33:59.306 "data_offset": 256, 00:33:59.306 "data_size": 7936 00:33:59.306 }, 00:33:59.306 { 00:33:59.306 "name": "pt2", 00:33:59.306 "uuid": "00000000-0000-0000-0000-000000000002", 00:33:59.306 "is_configured": true, 00:33:59.306 "data_offset": 256, 00:33:59.306 "data_size": 7936 00:33:59.306 } 00:33:59.306 ] 00:33:59.306 }' 00:33:59.306 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:59.306 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:00.240 14:26:45 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:00.240 [2024-07-15 14:26:46.102927] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:00.240 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:00.240 "name": "raid_bdev1", 00:34:00.240 "aliases": [ 00:34:00.240 "3c926d90-3b79-4033-9169-7c4fb45e9214" 00:34:00.240 ], 00:34:00.240 "product_name": "Raid Volume", 00:34:00.240 "block_size": 4128, 00:34:00.240 "num_blocks": 7936, 00:34:00.240 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:00.240 "md_size": 32, 00:34:00.240 "md_interleave": true, 00:34:00.240 "dif_type": 0, 00:34:00.240 "assigned_rate_limits": { 00:34:00.240 "rw_ios_per_sec": 0, 00:34:00.240 "rw_mbytes_per_sec": 0, 00:34:00.240 "r_mbytes_per_sec": 0, 00:34:00.240 "w_mbytes_per_sec": 0 00:34:00.240 }, 00:34:00.240 "claimed": false, 00:34:00.240 "zoned": false, 00:34:00.240 "supported_io_types": { 00:34:00.240 "read": true, 00:34:00.240 "write": true, 00:34:00.240 "unmap": false, 00:34:00.240 "flush": false, 00:34:00.240 "reset": true, 00:34:00.240 "nvme_admin": false, 00:34:00.240 "nvme_io": false, 00:34:00.240 "nvme_io_md": false, 00:34:00.240 "write_zeroes": true, 00:34:00.240 "zcopy": false, 00:34:00.241 "get_zone_info": false, 00:34:00.241 "zone_management": false, 00:34:00.241 "zone_append": false, 00:34:00.241 "compare": false, 00:34:00.241 "compare_and_write": false, 00:34:00.241 "abort": false, 00:34:00.241 "seek_hole": false, 00:34:00.241 "seek_data": false, 00:34:00.241 "copy": false, 00:34:00.241 "nvme_iov_md": false 00:34:00.241 }, 00:34:00.241 "memory_domains": [ 00:34:00.241 { 00:34:00.241 "dma_device_id": "system", 00:34:00.241 "dma_device_type": 1 00:34:00.241 }, 00:34:00.241 { 00:34:00.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.241 "dma_device_type": 2 00:34:00.241 }, 00:34:00.241 { 00:34:00.241 "dma_device_id": "system", 00:34:00.241 "dma_device_type": 1 00:34:00.241 }, 00:34:00.241 { 00:34:00.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.241 "dma_device_type": 2 00:34:00.241 } 00:34:00.241 ], 00:34:00.241 "driver_specific": { 00:34:00.241 "raid": { 00:34:00.241 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:00.241 "strip_size_kb": 0, 00:34:00.241 "state": "online", 00:34:00.241 "raid_level": "raid1", 00:34:00.241 "superblock": true, 00:34:00.241 "num_base_bdevs": 2, 00:34:00.241 "num_base_bdevs_discovered": 2, 00:34:00.241 "num_base_bdevs_operational": 2, 00:34:00.241 "base_bdevs_list": [ 00:34:00.241 { 00:34:00.241 "name": "pt1", 00:34:00.241 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:00.241 "is_configured": true, 00:34:00.241 "data_offset": 256, 00:34:00.241 "data_size": 7936 00:34:00.241 }, 00:34:00.241 { 00:34:00.241 "name": "pt2", 00:34:00.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:00.241 "is_configured": true, 00:34:00.241 "data_offset": 256, 00:34:00.241 "data_size": 7936 00:34:00.241 } 00:34:00.241 ] 00:34:00.241 } 00:34:00.241 } 00:34:00.241 }' 00:34:00.241 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:00.241 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:00.241 pt2' 00:34:00.241 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:00.241 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:00.241 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:00.499 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:00.499 "name": "pt1", 00:34:00.499 "aliases": [ 00:34:00.499 "00000000-0000-0000-0000-000000000001" 00:34:00.499 ], 00:34:00.499 "product_name": "passthru", 00:34:00.499 "block_size": 4128, 00:34:00.499 "num_blocks": 8192, 00:34:00.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:00.499 "md_size": 32, 00:34:00.499 "md_interleave": true, 00:34:00.499 "dif_type": 0, 00:34:00.499 "assigned_rate_limits": { 00:34:00.499 "rw_ios_per_sec": 0, 00:34:00.499 "rw_mbytes_per_sec": 0, 00:34:00.499 "r_mbytes_per_sec": 0, 00:34:00.499 "w_mbytes_per_sec": 0 00:34:00.499 }, 00:34:00.499 "claimed": true, 00:34:00.500 "claim_type": "exclusive_write", 00:34:00.500 "zoned": false, 00:34:00.500 "supported_io_types": { 00:34:00.500 "read": true, 00:34:00.500 "write": true, 00:34:00.500 "unmap": true, 00:34:00.500 "flush": true, 00:34:00.500 "reset": true, 00:34:00.500 "nvme_admin": false, 00:34:00.500 "nvme_io": false, 00:34:00.500 "nvme_io_md": false, 00:34:00.500 "write_zeroes": true, 00:34:00.500 "zcopy": true, 00:34:00.500 "get_zone_info": false, 00:34:00.500 "zone_management": false, 00:34:00.500 "zone_append": false, 00:34:00.500 "compare": false, 00:34:00.500 "compare_and_write": false, 00:34:00.500 "abort": true, 00:34:00.500 "seek_hole": false, 00:34:00.500 "seek_data": false, 00:34:00.500 "copy": true, 00:34:00.500 "nvme_iov_md": false 00:34:00.500 }, 00:34:00.500 "memory_domains": [ 00:34:00.500 { 00:34:00.500 "dma_device_id": "system", 00:34:00.500 "dma_device_type": 1 00:34:00.500 }, 00:34:00.500 { 00:34:00.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:00.500 "dma_device_type": 2 00:34:00.500 } 00:34:00.500 ], 00:34:00.500 "driver_specific": { 00:34:00.500 "passthru": { 00:34:00.500 "name": "pt1", 00:34:00.500 "base_bdev_name": "malloc1" 00:34:00.500 } 00:34:00.500 } 00:34:00.500 }' 00:34:00.500 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.500 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:00.758 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:34:00.758 14:26:46 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:01.016 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:01.016 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:01.016 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:01.016 "name": "pt2", 00:34:01.016 "aliases": [ 00:34:01.016 "00000000-0000-0000-0000-000000000002" 00:34:01.016 ], 00:34:01.016 "product_name": "passthru", 00:34:01.016 "block_size": 4128, 00:34:01.016 "num_blocks": 8192, 00:34:01.016 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:01.016 "md_size": 32, 00:34:01.016 "md_interleave": true, 00:34:01.016 "dif_type": 0, 00:34:01.017 "assigned_rate_limits": { 00:34:01.017 "rw_ios_per_sec": 0, 00:34:01.017 "rw_mbytes_per_sec": 0, 00:34:01.017 "r_mbytes_per_sec": 0, 00:34:01.017 "w_mbytes_per_sec": 0 00:34:01.017 }, 00:34:01.017 "claimed": true, 00:34:01.017 "claim_type": "exclusive_write", 00:34:01.017 "zoned": false, 00:34:01.017 "supported_io_types": { 00:34:01.017 "read": true, 00:34:01.017 "write": true, 00:34:01.017 "unmap": true, 00:34:01.017 "flush": true, 00:34:01.017 "reset": true, 00:34:01.017 "nvme_admin": false, 00:34:01.017 "nvme_io": false, 00:34:01.017 "nvme_io_md": false, 00:34:01.017 "write_zeroes": true, 00:34:01.017 "zcopy": true, 00:34:01.017 "get_zone_info": false, 00:34:01.017 "zone_management": false, 00:34:01.017 "zone_append": false, 00:34:01.017 "compare": false, 00:34:01.017 "compare_and_write": false, 00:34:01.017 "abort": true, 00:34:01.017 "seek_hole": false, 00:34:01.017 "seek_data": false, 00:34:01.017 "copy": true, 00:34:01.017 "nvme_iov_md": false 00:34:01.017 }, 00:34:01.017 "memory_domains": [ 00:34:01.017 { 00:34:01.017 "dma_device_id": "system", 00:34:01.017 "dma_device_type": 1 00:34:01.017 }, 00:34:01.017 { 00:34:01.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:01.017 "dma_device_type": 2 00:34:01.017 } 00:34:01.017 ], 00:34:01.017 "driver_specific": { 00:34:01.017 "passthru": { 00:34:01.017 "name": "pt2", 00:34:01.017 "base_bdev_name": "malloc2" 00:34:01.017 } 00:34:01.017 } 00:34:01.017 }' 00:34:01.017 14:26:46 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:34:01.275 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:01.534 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:01.534 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:34:01.534 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:01.534 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:34:01.793 [2024-07-15 14:26:47.571163] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:01.793 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3c926d90-3b79-4033-9169-7c4fb45e9214 00:34:01.793 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 3c926d90-3b79-4033-9169-7c4fb45e9214 ']' 00:34:01.793 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:02.051 [2024-07-15 14:26:47.810998] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:02.051 [2024-07-15 14:26:47.811251] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:02.051 [2024-07-15 14:26:47.811468] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:02.051 [2024-07-15 14:26:47.811638] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:02.051 [2024-07-15 14:26:47.811769] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:34:02.051 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.051 14:26:47 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:34:02.310 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:34:02.310 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:34:02.310 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:02.310 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:02.310 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:34:02.310 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:02.569 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:02.569 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:02.827 14:26:48 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:34:03.099 [2024-07-15 14:26:49.007179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:03.099 [2024-07-15 14:26:49.008896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:03.099 [2024-07-15 14:26:49.009099] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:03.099 [2024-07-15 14:26:49.009316] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:03.099 [2024-07-15 14:26:49.009476] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:03.099 [2024-07-15 14:26:49.009522] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:34:03.099 request: 00:34:03.099 { 00:34:03.099 "name": "raid_bdev1", 00:34:03.099 "raid_level": "raid1", 00:34:03.099 "base_bdevs": [ 00:34:03.099 "malloc1", 00:34:03.099 "malloc2" 00:34:03.099 ], 00:34:03.099 "superblock": false, 00:34:03.099 "method": "bdev_raid_create", 00:34:03.099 "req_id": 1 00:34:03.099 } 00:34:03.099 Got JSON-RPC error response 00:34:03.099 response: 00:34:03.099 { 00:34:03.099 "code": -17, 00:34:03.099 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:03.099 } 00:34:03.099 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:34:03.099 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:03.099 14:26:49 bdev_raid.raid_superblock_test_md_interleaved 
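The NOT wrapper exercised above (bdev_raid.sh@456 via autotest_common.sh) only asserts that the RPC fails; the expected failure is the -17 "File exists" response shown in the trace, raised because malloc1 and malloc2 already carry raid_bdev1's superblock. An illustrative inline equivalent, without the test suite's NOT/valid_exec_arg helpers, would be:

# Expected to fail: malloc1/malloc2 already hold the superblock of raid_bdev1.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
       bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo "bdev_raid_create unexpectedly succeeded" >&2
    exit 1
fi
# Response observed in the trace:
#   { "code": -17, "message": "Failed to create RAID bdev raid_bdev1: File exists" }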
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:03.099 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:03.099 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.099 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:34:03.363 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:34:03.363 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:34:03.363 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:03.621 [2024-07-15 14:26:49.567260] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:03.621 [2024-07-15 14:26:49.567682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:03.621 [2024-07-15 14:26:49.567882] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:03.621 [2024-07-15 14:26:49.568016] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:03.621 [2024-07-15 14:26:49.569885] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:03.621 [2024-07-15 14:26:49.570121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:03.621 [2024-07-15 14:26:49.570317] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:03.621 [2024-07-15 14:26:49.570499] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:03.621 pt1 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.621 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.878 14:26:49 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:03.878 "name": "raid_bdev1", 00:34:03.878 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:03.878 "strip_size_kb": 0, 00:34:03.878 "state": "configuring", 00:34:03.878 "raid_level": "raid1", 00:34:03.878 "superblock": true, 00:34:03.878 "num_base_bdevs": 2, 00:34:03.878 "num_base_bdevs_discovered": 1, 00:34:03.878 "num_base_bdevs_operational": 2, 00:34:03.878 "base_bdevs_list": [ 00:34:03.878 { 00:34:03.878 "name": "pt1", 00:34:03.878 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:03.878 "is_configured": true, 00:34:03.878 "data_offset": 256, 00:34:03.878 "data_size": 7936 00:34:03.878 }, 00:34:03.878 { 00:34:03.878 "name": null, 00:34:03.878 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:03.878 "is_configured": false, 00:34:03.878 "data_offset": 256, 00:34:03.878 "data_size": 7936 00:34:03.878 } 00:34:03.878 ] 00:34:03.878 }' 00:34:03.878 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:03.878 14:26:49 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:04.812 [2024-07-15 14:26:50.719421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:04.812 [2024-07-15 14:26:50.719684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:04.812 [2024-07-15 14:26:50.719792] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:34:04.812 [2024-07-15 14:26:50.720014] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:04.812 [2024-07-15 14:26:50.720214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:04.812 [2024-07-15 14:26:50.720394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:04.812 [2024-07-15 14:26:50.720562] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:04.812 [2024-07-15 14:26:50.720705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:04.812 [2024-07-15 14:26:50.720914] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:34:04.812 [2024-07-15 14:26:50.721036] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:04.812 [2024-07-15 14:26:50.721164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:04.812 [2024-07-15 14:26:50.721348] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:34:04.812 [2024-07-15 14:26:50.721459] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:34:04.812 [2024-07-15 14:26:50.721602] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:04.812 pt2 00:34:04.812 
14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.812 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.070 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:05.070 "name": "raid_bdev1", 00:34:05.070 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:05.070 "strip_size_kb": 0, 00:34:05.070 "state": "online", 00:34:05.070 "raid_level": "raid1", 00:34:05.070 "superblock": true, 00:34:05.070 "num_base_bdevs": 2, 00:34:05.070 "num_base_bdevs_discovered": 2, 00:34:05.071 "num_base_bdevs_operational": 2, 00:34:05.071 "base_bdevs_list": [ 00:34:05.071 { 00:34:05.071 "name": "pt1", 00:34:05.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:05.071 "is_configured": true, 00:34:05.071 "data_offset": 256, 00:34:05.071 "data_size": 7936 00:34:05.071 }, 00:34:05.071 { 00:34:05.071 "name": "pt2", 00:34:05.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:05.071 "is_configured": true, 00:34:05.071 "data_offset": 256, 00:34:05.071 "data_size": 7936 00:34:05.071 } 00:34:05.071 ] 00:34:05.071 }' 00:34:05.071 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:05.071 14:26:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:05.636 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:34:05.636 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:34:05.636 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:05.636 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:05.636 14:26:51 
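The verify_raid_bdev_state calls above boil down to dumping raid_bdev1 with bdev_raid_get_bdevs and comparing a handful of fields. A minimal sketch of that check, using only values visible in the trace (the real helper in bdev_raid.sh checks more fields than shown here):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Expect an online raid1 volume with both base bdevs discovered and operational.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state      <<<"$info") == online ]]
[[ $(jq -r .raid_level <<<"$info") == raid1  ]]
[[ $(jq -r .num_base_bdevs_discovered  <<<"$info") == 2 ]]
[[ $(jq -r .num_base_bdevs_operational <<<"$info") == 2 ]]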
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:05.636 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:34:05.636 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:05.636 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:05.894 [2024-07-15 14:26:51.831731] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:05.894 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:05.894 "name": "raid_bdev1", 00:34:05.894 "aliases": [ 00:34:05.894 "3c926d90-3b79-4033-9169-7c4fb45e9214" 00:34:05.894 ], 00:34:05.894 "product_name": "Raid Volume", 00:34:05.894 "block_size": 4128, 00:34:05.894 "num_blocks": 7936, 00:34:05.894 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:05.894 "md_size": 32, 00:34:05.894 "md_interleave": true, 00:34:05.894 "dif_type": 0, 00:34:05.894 "assigned_rate_limits": { 00:34:05.894 "rw_ios_per_sec": 0, 00:34:05.894 "rw_mbytes_per_sec": 0, 00:34:05.894 "r_mbytes_per_sec": 0, 00:34:05.894 "w_mbytes_per_sec": 0 00:34:05.894 }, 00:34:05.894 "claimed": false, 00:34:05.894 "zoned": false, 00:34:05.894 "supported_io_types": { 00:34:05.894 "read": true, 00:34:05.894 "write": true, 00:34:05.894 "unmap": false, 00:34:05.894 "flush": false, 00:34:05.894 "reset": true, 00:34:05.894 "nvme_admin": false, 00:34:05.894 "nvme_io": false, 00:34:05.894 "nvme_io_md": false, 00:34:05.894 "write_zeroes": true, 00:34:05.894 "zcopy": false, 00:34:05.894 "get_zone_info": false, 00:34:05.894 "zone_management": false, 00:34:05.894 "zone_append": false, 00:34:05.894 "compare": false, 00:34:05.894 "compare_and_write": false, 00:34:05.894 "abort": false, 00:34:05.894 "seek_hole": false, 00:34:05.894 "seek_data": false, 00:34:05.894 "copy": false, 00:34:05.894 "nvme_iov_md": false 00:34:05.894 }, 00:34:05.894 "memory_domains": [ 00:34:05.894 { 00:34:05.894 "dma_device_id": "system", 00:34:05.894 "dma_device_type": 1 00:34:05.894 }, 00:34:05.894 { 00:34:05.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:05.894 "dma_device_type": 2 00:34:05.894 }, 00:34:05.894 { 00:34:05.894 "dma_device_id": "system", 00:34:05.894 "dma_device_type": 1 00:34:05.894 }, 00:34:05.894 { 00:34:05.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:05.894 "dma_device_type": 2 00:34:05.894 } 00:34:05.894 ], 00:34:05.894 "driver_specific": { 00:34:05.894 "raid": { 00:34:05.894 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:05.894 "strip_size_kb": 0, 00:34:05.894 "state": "online", 00:34:05.894 "raid_level": "raid1", 00:34:05.894 "superblock": true, 00:34:05.894 "num_base_bdevs": 2, 00:34:05.894 "num_base_bdevs_discovered": 2, 00:34:05.894 "num_base_bdevs_operational": 2, 00:34:05.894 "base_bdevs_list": [ 00:34:05.894 { 00:34:05.894 "name": "pt1", 00:34:05.894 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:05.894 "is_configured": true, 00:34:05.894 "data_offset": 256, 00:34:05.894 "data_size": 7936 00:34:05.894 }, 00:34:05.894 { 00:34:05.894 "name": "pt2", 00:34:05.894 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:05.894 "is_configured": true, 00:34:05.894 "data_offset": 256, 00:34:05.894 "data_size": 7936 00:34:05.894 } 00:34:05.894 ] 00:34:05.894 } 00:34:05.894 } 00:34:05.894 }' 00:34:05.894 14:26:51 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:05.894 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:34:05.894 pt2' 00:34:05.894 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:05.894 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:34:05.894 14:26:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:06.153 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:06.153 "name": "pt1", 00:34:06.153 "aliases": [ 00:34:06.153 "00000000-0000-0000-0000-000000000001" 00:34:06.153 ], 00:34:06.153 "product_name": "passthru", 00:34:06.153 "block_size": 4128, 00:34:06.153 "num_blocks": 8192, 00:34:06.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:06.153 "md_size": 32, 00:34:06.153 "md_interleave": true, 00:34:06.153 "dif_type": 0, 00:34:06.153 "assigned_rate_limits": { 00:34:06.153 "rw_ios_per_sec": 0, 00:34:06.153 "rw_mbytes_per_sec": 0, 00:34:06.153 "r_mbytes_per_sec": 0, 00:34:06.153 "w_mbytes_per_sec": 0 00:34:06.153 }, 00:34:06.153 "claimed": true, 00:34:06.153 "claim_type": "exclusive_write", 00:34:06.153 "zoned": false, 00:34:06.153 "supported_io_types": { 00:34:06.153 "read": true, 00:34:06.153 "write": true, 00:34:06.153 "unmap": true, 00:34:06.153 "flush": true, 00:34:06.153 "reset": true, 00:34:06.153 "nvme_admin": false, 00:34:06.153 "nvme_io": false, 00:34:06.153 "nvme_io_md": false, 00:34:06.153 "write_zeroes": true, 00:34:06.153 "zcopy": true, 00:34:06.153 "get_zone_info": false, 00:34:06.153 "zone_management": false, 00:34:06.153 "zone_append": false, 00:34:06.153 "compare": false, 00:34:06.153 "compare_and_write": false, 00:34:06.153 "abort": true, 00:34:06.153 "seek_hole": false, 00:34:06.153 "seek_data": false, 00:34:06.153 "copy": true, 00:34:06.153 "nvme_iov_md": false 00:34:06.153 }, 00:34:06.153 "memory_domains": [ 00:34:06.153 { 00:34:06.153 "dma_device_id": "system", 00:34:06.153 "dma_device_type": 1 00:34:06.153 }, 00:34:06.153 { 00:34:06.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:06.153 "dma_device_type": 2 00:34:06.153 } 00:34:06.153 ], 00:34:06.153 "driver_specific": { 00:34:06.153 "passthru": { 00:34:06.153 "name": "pt1", 00:34:06.153 "base_bdev_name": "malloc1" 00:34:06.153 } 00:34:06.153 } 00:34:06.153 }' 00:34:06.153 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:06.412 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:06.412 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:34:06.412 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:06.412 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:06.412 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:34:06.412 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:06.412 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:34:06.670 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:34:06.670 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:06.670 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:06.670 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:34:06.670 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:06.670 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:34:06.670 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:06.929 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:06.929 "name": "pt2", 00:34:06.929 "aliases": [ 00:34:06.929 "00000000-0000-0000-0000-000000000002" 00:34:06.929 ], 00:34:06.929 "product_name": "passthru", 00:34:06.929 "block_size": 4128, 00:34:06.929 "num_blocks": 8192, 00:34:06.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:06.929 "md_size": 32, 00:34:06.929 "md_interleave": true, 00:34:06.929 "dif_type": 0, 00:34:06.929 "assigned_rate_limits": { 00:34:06.929 "rw_ios_per_sec": 0, 00:34:06.929 "rw_mbytes_per_sec": 0, 00:34:06.929 "r_mbytes_per_sec": 0, 00:34:06.929 "w_mbytes_per_sec": 0 00:34:06.929 }, 00:34:06.929 "claimed": true, 00:34:06.929 "claim_type": "exclusive_write", 00:34:06.929 "zoned": false, 00:34:06.929 "supported_io_types": { 00:34:06.929 "read": true, 00:34:06.929 "write": true, 00:34:06.929 "unmap": true, 00:34:06.929 "flush": true, 00:34:06.929 "reset": true, 00:34:06.929 "nvme_admin": false, 00:34:06.929 "nvme_io": false, 00:34:06.929 "nvme_io_md": false, 00:34:06.929 "write_zeroes": true, 00:34:06.929 "zcopy": true, 00:34:06.929 "get_zone_info": false, 00:34:06.929 "zone_management": false, 00:34:06.929 "zone_append": false, 00:34:06.929 "compare": false, 00:34:06.929 "compare_and_write": false, 00:34:06.929 "abort": true, 00:34:06.929 "seek_hole": false, 00:34:06.929 "seek_data": false, 00:34:06.929 "copy": true, 00:34:06.929 "nvme_iov_md": false 00:34:06.929 }, 00:34:06.929 "memory_domains": [ 00:34:06.929 { 00:34:06.929 "dma_device_id": "system", 00:34:06.929 "dma_device_type": 1 00:34:06.929 }, 00:34:06.929 { 00:34:06.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:06.929 "dma_device_type": 2 00:34:06.929 } 00:34:06.929 ], 00:34:06.929 "driver_specific": { 00:34:06.929 "passthru": { 00:34:06.929 "name": "pt2", 00:34:06.929 "base_bdev_name": "malloc2" 00:34:06.929 } 00:34:06.929 } 00:34:06.929 }' 00:34:06.929 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:06.929 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:06.929 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:34:06.929 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:07.187 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:07.187 14:26:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:34:07.187 14:26:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:07.187 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:07.187 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:34:07.187 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:07.188 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:07.188 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:34:07.188 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:34:07.188 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:07.516 [2024-07-15 14:26:53.424027] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:07.516 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 3c926d90-3b79-4033-9169-7c4fb45e9214 '!=' 3c926d90-3b79-4033-9169-7c4fb45e9214 ']' 00:34:07.516 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:34:07.516 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:34:07.516 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:34:07.516 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:34:07.774 [2024-07-15 14:26:53.707922] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.774 14:26:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:08.339 14:26:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:08.339 "name": "raid_bdev1", 00:34:08.339 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:08.339 "strip_size_kb": 0, 00:34:08.339 "state": "online", 00:34:08.339 "raid_level": "raid1", 00:34:08.339 "superblock": true, 00:34:08.339 "num_base_bdevs": 2, 00:34:08.339 "num_base_bdevs_discovered": 1, 00:34:08.339 "num_base_bdevs_operational": 1, 00:34:08.339 "base_bdevs_list": [ 00:34:08.339 { 00:34:08.339 "name": null, 00:34:08.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:08.339 "is_configured": false, 00:34:08.339 "data_offset": 256, 00:34:08.339 "data_size": 7936 00:34:08.339 }, 00:34:08.339 { 00:34:08.339 "name": "pt2", 00:34:08.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:08.339 "is_configured": true, 00:34:08.339 "data_offset": 256, 00:34:08.339 "data_size": 7936 00:34:08.339 } 00:34:08.339 ] 00:34:08.339 }' 00:34:08.339 14:26:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:08.339 14:26:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:08.904 14:26:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:08.904 [2024-07-15 14:26:54.856071] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:08.904 [2024-07-15 14:26:54.856315] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:08.904 [2024-07-15 14:26:54.856477] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:08.904 [2024-07-15 14:26:54.856642] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:08.904 [2024-07-15 14:26:54.856775] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:34:08.904 14:26:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:08.904 14:26:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:34:09.163 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:34:09.163 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:34:09.163 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:34:09.163 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:34:09.163 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:34:09.421 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:34:09.421 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:34:09.421 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:34:09.421 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:34:09.421 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- 
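The degradation step traced above comes down to two RPCs: deleting one passthru base bdev leaves the raid1 volume online but with a single discovered member, and the removed slot shows up as a null name in base_bdevs_list. A sketch, with the expected values copied from the trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_passthru_delete pt1   # triggers _raid_bdev_remove_base_bdev: pt1
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
[[ $(jq -r .state <<<"$info") == online ]]
[[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 1 ]]
[[ $(jq -r '.base_bdevs_list[0].name' <<<"$info") == null ]]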
bdev/bdev_raid.sh@518 -- # i=1 00:34:09.421 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:09.680 [2024-07-15 14:26:55.628274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:09.680 [2024-07-15 14:26:55.628763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:09.680 [2024-07-15 14:26:55.628992] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:09.680 [2024-07-15 14:26:55.629173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:09.680 [2024-07-15 14:26:55.631110] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:09.680 [2024-07-15 14:26:55.631352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:09.680 [2024-07-15 14:26:55.631565] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:09.680 [2024-07-15 14:26:55.631782] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:09.680 [2024-07-15 14:26:55.632027] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:34:09.680 [2024-07-15 14:26:55.632160] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:09.680 [2024-07-15 14:26:55.632263] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:09.680 [2024-07-15 14:26:55.632476] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:34:09.680 [2024-07-15 14:26:55.632613] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:34:09.680 [2024-07-15 14:26:55.632866] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:09.680 pt2 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.680 14:26:55 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.938 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:09.938 "name": "raid_bdev1", 00:34:09.938 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:09.938 "strip_size_kb": 0, 00:34:09.938 "state": "online", 00:34:09.938 "raid_level": "raid1", 00:34:09.938 "superblock": true, 00:34:09.938 "num_base_bdevs": 2, 00:34:09.938 "num_base_bdevs_discovered": 1, 00:34:09.938 "num_base_bdevs_operational": 1, 00:34:09.938 "base_bdevs_list": [ 00:34:09.938 { 00:34:09.938 "name": null, 00:34:09.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:09.938 "is_configured": false, 00:34:09.938 "data_offset": 256, 00:34:09.938 "data_size": 7936 00:34:09.938 }, 00:34:09.938 { 00:34:09.938 "name": "pt2", 00:34:09.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:09.938 "is_configured": true, 00:34:09.938 "data_offset": 256, 00:34:09.938 "data_size": 7936 00:34:09.938 } 00:34:09.938 ] 00:34:09.938 }' 00:34:09.938 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:09.938 14:26:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:10.505 14:26:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:10.764 [2024-07-15 14:26:56.717185] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:10.764 [2024-07-15 14:26:56.717276] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:10.764 [2024-07-15 14:26:56.717364] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:10.764 [2024-07-15 14:26:56.717414] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:10.764 [2024-07-15 14:26:56.717427] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:34:10.764 14:26:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.764 14:26:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:34:11.023 14:26:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:34:11.023 14:26:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:34:11.023 14:26:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:34:11.023 14:26:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:11.282 [2024-07-15 14:26:57.189328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:11.282 [2024-07-15 14:26:57.189514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.282 [2024-07-15 14:26:57.189570] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:34:11.282 [2024-07-15 14:26:57.189619] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.282 [2024-07-15 14:26:57.191400] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.282 [2024-07-15 14:26:57.191476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:11.282 [2024-07-15 14:26:57.191558] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:11.282 [2024-07-15 14:26:57.191626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:11.282 [2024-07-15 14:26:57.191747] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:11.282 [2024-07-15 14:26:57.191764] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:11.282 [2024-07-15 14:26:57.191783] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:34:11.282 [2024-07-15 14:26:57.191851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:11.282 [2024-07-15 14:26:57.191929] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:34:11.282 [2024-07-15 14:26:57.191944] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:11.282 [2024-07-15 14:26:57.191994] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:34:11.282 [2024-07-15 14:26:57.192056] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:34:11.282 [2024-07-15 14:26:57.192071] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:34:11.282 [2024-07-15 14:26:57.192114] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:11.282 pt1 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:11.282 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.283 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.541 14:26:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:11.541 "name": "raid_bdev1", 00:34:11.541 "uuid": "3c926d90-3b79-4033-9169-7c4fb45e9214", 00:34:11.541 "strip_size_kb": 0, 00:34:11.541 "state": "online", 00:34:11.541 "raid_level": "raid1", 00:34:11.541 "superblock": true, 00:34:11.541 "num_base_bdevs": 2, 00:34:11.541 "num_base_bdevs_discovered": 1, 00:34:11.541 "num_base_bdevs_operational": 1, 00:34:11.541 "base_bdevs_list": [ 00:34:11.541 { 00:34:11.541 "name": null, 00:34:11.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:11.541 "is_configured": false, 00:34:11.541 "data_offset": 256, 00:34:11.541 "data_size": 7936 00:34:11.541 }, 00:34:11.541 { 00:34:11.541 "name": "pt2", 00:34:11.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:11.541 "is_configured": true, 00:34:11.541 "data_offset": 256, 00:34:11.541 "data_size": 7936 00:34:11.541 } 00:34:11.541 ] 00:34:11.541 }' 00:34:11.541 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:11.541 14:26:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:12.108 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:12.108 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:34:12.444 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:34:12.444 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:12.444 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:34:12.702 [2024-07-15 14:26:58.569717] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 3c926d90-3b79-4033-9169-7c4fb45e9214 '!=' 3c926d90-3b79-4033-9169-7c4fb45e9214 ']' 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 220128 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 220128 ']' 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 220128 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 220128 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 220128' 00:34:12.702 killing process with pid 220128 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 
-- # kill 220128 00:34:12.702 14:26:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 220128 00:34:12.702 [2024-07-15 14:26:58.618818] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:12.703 [2024-07-15 14:26:58.618981] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:12.703 [2024-07-15 14:26:58.619039] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:12.703 [2024-07-15 14:26:58.619051] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:34:12.961 [2024-07-15 14:26:58.781947] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:14.338 14:26:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:34:14.338 00:34:14.338 real 0m17.292s 00:34:14.338 user 0m31.234s 00:34:14.338 sys 0m2.124s 00:34:14.338 14:26:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.338 14:26:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:14.338 ************************************ 00:34:14.338 END TEST raid_superblock_test_md_interleaved 00:34:14.338 ************************************ 00:34:14.338 14:26:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:14.338 14:26:59 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:34:14.338 14:26:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:34:14.338 14:26:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.338 14:26:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:14.338 ************************************ 00:34:14.338 START TEST raid_rebuild_test_sb_md_interleaved 00:34:14.338 ************************************ 00:34:14.338 14:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:34:14.338 14:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:34:14.338 14:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:34:14.338 14:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:34:14.338 14:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:34:14.338 14:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:34:14.338 14:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 
00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=220652 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 220652 /var/tmp/spdk-raid.sock 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 220652 ']' 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:14.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:14.338 14:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:14.338 [2024-07-15 14:27:00.048129] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:34:14.338 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:14.338 Zero copy mechanism will not be used. 
00:34:14.338 [2024-07-15 14:27:00.048333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220652 ] 00:34:14.338 [2024-07-15 14:27:00.212338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.597 [2024-07-15 14:27:00.434007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.855 [2024-07-15 14:27:00.665415] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:15.113 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:15.113 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:34:15.113 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:15.113 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:34:15.370 BaseBdev1_malloc 00:34:15.370 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:15.628 [2024-07-15 14:27:01.557701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:15.628 [2024-07-15 14:27:01.558222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:15.628 [2024-07-15 14:27:01.558341] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:34:15.628 [2024-07-15 14:27:01.558432] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:15.628 [2024-07-15 14:27:01.560165] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:15.628 [2024-07-15 14:27:01.560299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:15.628 BaseBdev1 00:34:15.628 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:15.628 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:34:15.885 BaseBdev2_malloc 00:34:16.143 14:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:16.401 [2024-07-15 14:27:02.218774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:16.401 [2024-07-15 14:27:02.219199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.401 [2024-07-15 14:27:02.219445] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:34:16.401 [2024-07-15 14:27:02.219673] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.401 [2024-07-15 14:27:02.221452] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.401 [2024-07-15 14:27:02.221783] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev2 00:34:16.401 BaseBdev2 00:34:16.401 14:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:34:16.659 spare_malloc 00:34:16.659 14:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:16.916 spare_delay 00:34:16.916 14:27:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:17.174 [2024-07-15 14:27:03.022902] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:17.174 [2024-07-15 14:27:03.023638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:17.174 [2024-07-15 14:27:03.023988] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:17.174 [2024-07-15 14:27:03.024210] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:17.174 [2024-07-15 14:27:03.026107] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:17.174 [2024-07-15 14:27:03.026349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:17.174 spare 00:34:17.174 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:34:17.432 [2024-07-15 14:27:03.259097] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:17.432 [2024-07-15 14:27:03.260776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:17.432 [2024-07-15 14:27:03.261119] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:34:17.432 [2024-07-15 14:27:03.261274] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:17.432 [2024-07-15 14:27:03.261438] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:34:17.432 [2024-07-15 14:27:03.261651] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:34:17.432 [2024-07-15 14:27:03.261771] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:34:17.432 [2024-07-15 14:27:03.261939] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=2 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.432 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.690 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:17.690 "name": "raid_bdev1", 00:34:17.690 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:17.690 "strip_size_kb": 0, 00:34:17.690 "state": "online", 00:34:17.690 "raid_level": "raid1", 00:34:17.690 "superblock": true, 00:34:17.690 "num_base_bdevs": 2, 00:34:17.690 "num_base_bdevs_discovered": 2, 00:34:17.690 "num_base_bdevs_operational": 2, 00:34:17.690 "base_bdevs_list": [ 00:34:17.690 { 00:34:17.690 "name": "BaseBdev1", 00:34:17.690 "uuid": "e2e705fd-6821-5e21-a600-fe8b47a7d90c", 00:34:17.690 "is_configured": true, 00:34:17.690 "data_offset": 256, 00:34:17.690 "data_size": 7936 00:34:17.690 }, 00:34:17.690 { 00:34:17.690 "name": "BaseBdev2", 00:34:17.690 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:17.690 "is_configured": true, 00:34:17.690 "data_offset": 256, 00:34:17.690 "data_size": 7936 00:34:17.690 } 00:34:17.690 ] 00:34:17.690 }' 00:34:17.690 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:17.690 14:27:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:18.256 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:34:18.256 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:18.514 [2024-07-15 14:27:04.415339] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:18.514 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:34:18.514 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.514 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:18.772 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:34:18.772 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:34:18.772 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:34:18.772 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:19.030 [2024-07-15 14:27:04.923326] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:19.030 14:27:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.289 14:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:19.289 "name": "raid_bdev1", 00:34:19.289 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:19.289 "strip_size_kb": 0, 00:34:19.289 "state": "online", 00:34:19.289 "raid_level": "raid1", 00:34:19.289 "superblock": true, 00:34:19.289 "num_base_bdevs": 2, 00:34:19.289 "num_base_bdevs_discovered": 1, 00:34:19.289 "num_base_bdevs_operational": 1, 00:34:19.289 "base_bdevs_list": [ 00:34:19.289 { 00:34:19.289 "name": null, 00:34:19.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:19.289 "is_configured": false, 00:34:19.289 "data_offset": 256, 00:34:19.289 "data_size": 7936 00:34:19.289 }, 00:34:19.289 { 00:34:19.289 "name": "BaseBdev2", 00:34:19.289 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:19.289 "is_configured": true, 00:34:19.289 "data_offset": 256, 00:34:19.289 "data_size": 7936 00:34:19.289 } 00:34:19.289 ] 00:34:19.289 }' 00:34:19.289 14:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:19.289 14:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:19.856 14:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:20.115 [2024-07-15 14:27:05.967447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:20.115 [2024-07-15 14:27:05.982798] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:20.115 [2024-07-15 14:27:05.984382] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:20.115 14:27:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # 
sleep 1 00:34:21.049 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:21.049 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:21.049 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:21.049 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:21.049 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:21.049 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:21.049 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.307 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:21.307 "name": "raid_bdev1", 00:34:21.307 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:21.307 "strip_size_kb": 0, 00:34:21.307 "state": "online", 00:34:21.307 "raid_level": "raid1", 00:34:21.308 "superblock": true, 00:34:21.308 "num_base_bdevs": 2, 00:34:21.308 "num_base_bdevs_discovered": 2, 00:34:21.308 "num_base_bdevs_operational": 2, 00:34:21.308 "process": { 00:34:21.308 "type": "rebuild", 00:34:21.308 "target": "spare", 00:34:21.308 "progress": { 00:34:21.308 "blocks": 3072, 00:34:21.308 "percent": 38 00:34:21.308 } 00:34:21.308 }, 00:34:21.308 "base_bdevs_list": [ 00:34:21.308 { 00:34:21.308 "name": "spare", 00:34:21.308 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:21.308 "is_configured": true, 00:34:21.308 "data_offset": 256, 00:34:21.308 "data_size": 7936 00:34:21.308 }, 00:34:21.308 { 00:34:21.308 "name": "BaseBdev2", 00:34:21.308 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:21.308 "is_configured": true, 00:34:21.308 "data_offset": 256, 00:34:21.308 "data_size": 7936 00:34:21.308 } 00:34:21.308 ] 00:34:21.308 }' 00:34:21.308 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:21.566 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:21.566 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:21.566 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:21.566 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:21.824 [2024-07-15 14:27:07.629987] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:21.824 [2024-07-15 14:27:07.694537] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:21.824 [2024-07-15 14:27:07.695104] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:21.824 [2024-07-15 14:27:07.695248] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:21.824 [2024-07-15 14:27:07.695297] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:21.824 14:27:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:21.824 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.083 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:22.083 "name": "raid_bdev1", 00:34:22.083 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:22.083 "strip_size_kb": 0, 00:34:22.083 "state": "online", 00:34:22.083 "raid_level": "raid1", 00:34:22.083 "superblock": true, 00:34:22.083 "num_base_bdevs": 2, 00:34:22.083 "num_base_bdevs_discovered": 1, 00:34:22.083 "num_base_bdevs_operational": 1, 00:34:22.083 "base_bdevs_list": [ 00:34:22.083 { 00:34:22.083 "name": null, 00:34:22.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.083 "is_configured": false, 00:34:22.083 "data_offset": 256, 00:34:22.083 "data_size": 7936 00:34:22.083 }, 00:34:22.083 { 00:34:22.083 "name": "BaseBdev2", 00:34:22.083 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:22.083 "is_configured": true, 00:34:22.083 "data_offset": 256, 00:34:22.083 "data_size": 7936 00:34:22.083 } 00:34:22.083 ] 00:34:22.083 }' 00:34:22.083 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:22.083 14:27:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:22.650 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:22.650 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:22.650 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:22.650 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:22.650 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:22.650 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.650 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.907 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:22.908 "name": "raid_bdev1", 00:34:22.908 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:22.908 "strip_size_kb": 0, 00:34:22.908 "state": "online", 00:34:22.908 "raid_level": "raid1", 00:34:22.908 "superblock": true, 00:34:22.908 "num_base_bdevs": 2, 00:34:22.908 "num_base_bdevs_discovered": 1, 00:34:22.908 "num_base_bdevs_operational": 1, 00:34:22.908 "base_bdevs_list": [ 00:34:22.908 { 00:34:22.908 "name": null, 00:34:22.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.908 "is_configured": false, 00:34:22.908 "data_offset": 256, 00:34:22.908 "data_size": 7936 00:34:22.908 }, 00:34:22.908 { 00:34:22.908 "name": "BaseBdev2", 00:34:22.908 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:22.908 "is_configured": true, 00:34:22.908 "data_offset": 256, 00:34:22.908 "data_size": 7936 00:34:22.908 } 00:34:22.908 ] 00:34:22.908 }' 00:34:22.908 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:22.908 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:22.908 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:23.165 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:23.165 14:27:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:23.424 [2024-07-15 14:27:09.192765] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:23.424 [2024-07-15 14:27:09.207715] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:23.424 [2024-07-15 14:27:09.209330] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:23.424 14:27:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:34:24.381 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:24.381 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:24.381 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:24.381 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:24.381 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:24.381 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.381 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:24.640 "name": "raid_bdev1", 00:34:24.640 "uuid": 
"16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:24.640 "strip_size_kb": 0, 00:34:24.640 "state": "online", 00:34:24.640 "raid_level": "raid1", 00:34:24.640 "superblock": true, 00:34:24.640 "num_base_bdevs": 2, 00:34:24.640 "num_base_bdevs_discovered": 2, 00:34:24.640 "num_base_bdevs_operational": 2, 00:34:24.640 "process": { 00:34:24.640 "type": "rebuild", 00:34:24.640 "target": "spare", 00:34:24.640 "progress": { 00:34:24.640 "blocks": 3072, 00:34:24.640 "percent": 38 00:34:24.640 } 00:34:24.640 }, 00:34:24.640 "base_bdevs_list": [ 00:34:24.640 { 00:34:24.640 "name": "spare", 00:34:24.640 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:24.640 "is_configured": true, 00:34:24.640 "data_offset": 256, 00:34:24.640 "data_size": 7936 00:34:24.640 }, 00:34:24.640 { 00:34:24.640 "name": "BaseBdev2", 00:34:24.640 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:24.640 "is_configured": true, 00:34:24.640 "data_offset": 256, 00:34:24.640 "data_size": 7936 00:34:24.640 } 00:34:24.640 ] 00:34:24.640 }' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:34:24.640 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1287 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.640 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:24.898 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:34:24.898 "name": "raid_bdev1", 00:34:24.898 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:24.898 "strip_size_kb": 0, 00:34:24.899 "state": "online", 00:34:24.899 "raid_level": "raid1", 00:34:24.899 "superblock": true, 00:34:24.899 "num_base_bdevs": 2, 00:34:24.899 "num_base_bdevs_discovered": 2, 00:34:24.899 "num_base_bdevs_operational": 2, 00:34:24.899 "process": { 00:34:24.899 "type": "rebuild", 00:34:24.899 "target": "spare", 00:34:24.899 "progress": { 00:34:24.899 "blocks": 4096, 00:34:24.899 "percent": 51 00:34:24.899 } 00:34:24.899 }, 00:34:24.899 "base_bdevs_list": [ 00:34:24.899 { 00:34:24.899 "name": "spare", 00:34:24.899 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:24.899 "is_configured": true, 00:34:24.899 "data_offset": 256, 00:34:24.899 "data_size": 7936 00:34:24.899 }, 00:34:24.899 { 00:34:24.899 "name": "BaseBdev2", 00:34:24.899 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:24.899 "is_configured": true, 00:34:24.899 "data_offset": 256, 00:34:24.899 "data_size": 7936 00:34:24.899 } 00:34:24.899 ] 00:34:24.899 }' 00:34:24.899 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:24.899 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:24.899 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:25.157 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:25.157 14:27:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.093 14:27:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.352 14:27:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:26.352 "name": "raid_bdev1", 00:34:26.352 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:26.352 "strip_size_kb": 0, 00:34:26.352 "state": "online", 00:34:26.352 "raid_level": "raid1", 00:34:26.352 "superblock": true, 00:34:26.352 "num_base_bdevs": 2, 00:34:26.352 "num_base_bdevs_discovered": 2, 00:34:26.352 "num_base_bdevs_operational": 2, 00:34:26.352 "process": { 00:34:26.352 "type": "rebuild", 00:34:26.352 "target": "spare", 00:34:26.352 "progress": { 00:34:26.352 "blocks": 7424, 00:34:26.352 "percent": 93 00:34:26.352 } 00:34:26.352 }, 00:34:26.352 "base_bdevs_list": [ 00:34:26.352 { 00:34:26.352 "name": 
"spare", 00:34:26.352 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:26.352 "is_configured": true, 00:34:26.352 "data_offset": 256, 00:34:26.352 "data_size": 7936 00:34:26.352 }, 00:34:26.352 { 00:34:26.352 "name": "BaseBdev2", 00:34:26.352 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:26.352 "is_configured": true, 00:34:26.352 "data_offset": 256, 00:34:26.352 "data_size": 7936 00:34:26.352 } 00:34:26.352 ] 00:34:26.352 }' 00:34:26.352 14:27:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:26.352 14:27:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:26.352 14:27:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:26.352 [2024-07-15 14:27:12.327019] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:26.352 [2024-07-15 14:27:12.327238] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:26.352 [2024-07-15 14:27:12.327793] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:26.352 14:27:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:26.352 14:27:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:27.764 "name": "raid_bdev1", 00:34:27.764 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:27.764 "strip_size_kb": 0, 00:34:27.764 "state": "online", 00:34:27.764 "raid_level": "raid1", 00:34:27.764 "superblock": true, 00:34:27.764 "num_base_bdevs": 2, 00:34:27.764 "num_base_bdevs_discovered": 2, 00:34:27.764 "num_base_bdevs_operational": 2, 00:34:27.764 "base_bdevs_list": [ 00:34:27.764 { 00:34:27.764 "name": "spare", 00:34:27.764 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:27.764 "is_configured": true, 00:34:27.764 "data_offset": 256, 00:34:27.764 "data_size": 7936 00:34:27.764 }, 00:34:27.764 { 00:34:27.764 "name": "BaseBdev2", 00:34:27.764 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:27.764 "is_configured": true, 00:34:27.764 "data_offset": 256, 00:34:27.764 "data_size": 7936 00:34:27.764 } 00:34:27.764 ] 00:34:27.764 }' 00:34:27.764 14:27:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:27.764 14:27:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:28.331 "name": "raid_bdev1", 00:34:28.331 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:28.331 "strip_size_kb": 0, 00:34:28.331 "state": "online", 00:34:28.331 "raid_level": "raid1", 00:34:28.331 "superblock": true, 00:34:28.331 "num_base_bdevs": 2, 00:34:28.331 "num_base_bdevs_discovered": 2, 00:34:28.331 "num_base_bdevs_operational": 2, 00:34:28.331 "base_bdevs_list": [ 00:34:28.331 { 00:34:28.331 "name": "spare", 00:34:28.331 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:28.331 "is_configured": true, 00:34:28.331 "data_offset": 256, 00:34:28.331 "data_size": 7936 00:34:28.331 }, 00:34:28.331 { 00:34:28.331 "name": "BaseBdev2", 00:34:28.331 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:28.331 "is_configured": true, 00:34:28.331 "data_offset": 256, 00:34:28.331 "data_size": 7936 00:34:28.331 } 00:34:28.331 ] 00:34:28.331 }' 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # 
local raid_level=raid1 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.331 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.590 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:28.590 "name": "raid_bdev1", 00:34:28.590 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:28.590 "strip_size_kb": 0, 00:34:28.590 "state": "online", 00:34:28.590 "raid_level": "raid1", 00:34:28.590 "superblock": true, 00:34:28.590 "num_base_bdevs": 2, 00:34:28.590 "num_base_bdevs_discovered": 2, 00:34:28.590 "num_base_bdevs_operational": 2, 00:34:28.590 "base_bdevs_list": [ 00:34:28.590 { 00:34:28.590 "name": "spare", 00:34:28.590 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:28.590 "is_configured": true, 00:34:28.590 "data_offset": 256, 00:34:28.590 "data_size": 7936 00:34:28.590 }, 00:34:28.590 { 00:34:28.590 "name": "BaseBdev2", 00:34:28.590 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:28.590 "is_configured": true, 00:34:28.590 "data_offset": 256, 00:34:28.590 "data_size": 7936 00:34:28.590 } 00:34:28.590 ] 00:34:28.590 }' 00:34:28.590 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:28.590 14:27:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:29.157 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:29.416 [2024-07-15 14:27:15.223806] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:29.416 [2024-07-15 14:27:15.224000] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:29.416 [2024-07-15 14:27:15.224192] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:29.416 [2024-07-15 14:27:15.224369] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:29.416 [2024-07-15 14:27:15.224483] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:34:29.416 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.416 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:34:29.675 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:34:29.675 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:34:29.675 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:34:29.675 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:29.933 14:27:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:30.191 [2024-07-15 14:27:16.003880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:30.191 [2024-07-15 14:27:16.004489] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:30.191 [2024-07-15 14:27:16.004812] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:30.191 [2024-07-15 14:27:16.005029] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:30.191 [2024-07-15 14:27:16.006779] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:30.191 [2024-07-15 14:27:16.007015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:30.191 [2024-07-15 14:27:16.007262] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:30.191 [2024-07-15 14:27:16.007434] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:30.191 [2024-07-15 14:27:16.007636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:30.191 spare 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.191 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:30.191 [2024-07-15 14:27:16.107835] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:34:30.191 [2024-07-15 
14:27:16.108034] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:34:30.192 [2024-07-15 14:27:16.108202] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:34:30.192 [2024-07-15 14:27:16.108460] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:34:30.192 [2024-07-15 14:27:16.108562] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:34:30.192 [2024-07-15 14:27:16.108738] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:30.450 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:30.450 "name": "raid_bdev1", 00:34:30.450 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:30.450 "strip_size_kb": 0, 00:34:30.450 "state": "online", 00:34:30.450 "raid_level": "raid1", 00:34:30.450 "superblock": true, 00:34:30.450 "num_base_bdevs": 2, 00:34:30.450 "num_base_bdevs_discovered": 2, 00:34:30.450 "num_base_bdevs_operational": 2, 00:34:30.450 "base_bdevs_list": [ 00:34:30.450 { 00:34:30.450 "name": "spare", 00:34:30.450 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:30.450 "is_configured": true, 00:34:30.450 "data_offset": 256, 00:34:30.450 "data_size": 7936 00:34:30.450 }, 00:34:30.450 { 00:34:30.450 "name": "BaseBdev2", 00:34:30.450 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:30.450 "is_configured": true, 00:34:30.450 "data_offset": 256, 00:34:30.450 "data_size": 7936 00:34:30.450 } 00:34:30.450 ] 00:34:30.450 }' 00:34:30.450 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:30.450 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:31.017 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:31.017 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:31.017 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:31.017 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:31.017 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:31.017 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.017 14:27:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.274 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:31.274 "name": "raid_bdev1", 00:34:31.274 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:31.274 "strip_size_kb": 0, 00:34:31.274 "state": "online", 00:34:31.274 "raid_level": "raid1", 00:34:31.274 "superblock": true, 00:34:31.274 "num_base_bdevs": 2, 00:34:31.274 "num_base_bdevs_discovered": 2, 00:34:31.274 "num_base_bdevs_operational": 2, 00:34:31.274 "base_bdevs_list": [ 00:34:31.274 { 00:34:31.274 "name": "spare", 00:34:31.274 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:31.274 "is_configured": true, 00:34:31.274 "data_offset": 256, 00:34:31.274 "data_size": 7936 00:34:31.274 }, 00:34:31.274 { 00:34:31.274 
"name": "BaseBdev2", 00:34:31.274 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:31.274 "is_configured": true, 00:34:31.274 "data_offset": 256, 00:34:31.274 "data_size": 7936 00:34:31.274 } 00:34:31.274 ] 00:34:31.274 }' 00:34:31.274 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:31.274 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:31.274 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:31.274 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:31.274 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.274 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:31.840 [2024-07-15 14:27:17.818016] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.840 14:27:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.097 14:27:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:32.097 "name": "raid_bdev1", 00:34:32.097 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:32.097 "strip_size_kb": 0, 00:34:32.097 "state": "online", 00:34:32.097 "raid_level": "raid1", 00:34:32.097 "superblock": true, 00:34:32.097 "num_base_bdevs": 2, 00:34:32.097 "num_base_bdevs_discovered": 1, 00:34:32.098 "num_base_bdevs_operational": 1, 
00:34:32.098 "base_bdevs_list": [ 00:34:32.098 { 00:34:32.098 "name": null, 00:34:32.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:32.098 "is_configured": false, 00:34:32.098 "data_offset": 256, 00:34:32.098 "data_size": 7936 00:34:32.098 }, 00:34:32.098 { 00:34:32.098 "name": "BaseBdev2", 00:34:32.098 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:32.098 "is_configured": true, 00:34:32.098 "data_offset": 256, 00:34:32.098 "data_size": 7936 00:34:32.098 } 00:34:32.098 ] 00:34:32.098 }' 00:34:32.098 14:27:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:32.098 14:27:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:33.035 14:27:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:33.035 [2024-07-15 14:27:18.994259] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.035 [2024-07-15 14:27:18.994602] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:33.035 [2024-07-15 14:27:18.994746] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:33.035 [2024-07-15 14:27:18.995276] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:33.035 [2024-07-15 14:27:19.009153] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:34:33.035 [2024-07-15 14:27:19.010692] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:33.035 14:27:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:34.410 "name": "raid_bdev1", 00:34:34.410 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:34.410 "strip_size_kb": 0, 00:34:34.410 "state": "online", 00:34:34.410 "raid_level": "raid1", 00:34:34.410 "superblock": true, 00:34:34.410 "num_base_bdevs": 2, 00:34:34.410 "num_base_bdevs_discovered": 2, 00:34:34.410 "num_base_bdevs_operational": 2, 00:34:34.410 "process": { 00:34:34.410 "type": "rebuild", 00:34:34.410 "target": "spare", 00:34:34.410 "progress": { 00:34:34.410 "blocks": 3072, 00:34:34.410 "percent": 38 00:34:34.410 } 00:34:34.410 }, 00:34:34.410 
"base_bdevs_list": [ 00:34:34.410 { 00:34:34.410 "name": "spare", 00:34:34.410 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:34.410 "is_configured": true, 00:34:34.410 "data_offset": 256, 00:34:34.410 "data_size": 7936 00:34:34.410 }, 00:34:34.410 { 00:34:34.410 "name": "BaseBdev2", 00:34:34.410 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:34.410 "is_configured": true, 00:34:34.410 "data_offset": 256, 00:34:34.410 "data_size": 7936 00:34:34.410 } 00:34:34.410 ] 00:34:34.410 }' 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:34.410 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:34.670 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:34.670 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:34.670 [2024-07-15 14:27:20.649065] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:34.930 [2024-07-15 14:27:20.720285] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:34.930 [2024-07-15 14:27:20.720912] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:34.930 [2024-07-15 14:27:20.721110] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:34.930 [2024-07-15 14:27:20.721183] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:34.930 14:27:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.189 14:27:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:34:35.189 "name": "raid_bdev1", 00:34:35.189 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:35.189 "strip_size_kb": 0, 00:34:35.189 "state": "online", 00:34:35.189 "raid_level": "raid1", 00:34:35.189 "superblock": true, 00:34:35.189 "num_base_bdevs": 2, 00:34:35.189 "num_base_bdevs_discovered": 1, 00:34:35.189 "num_base_bdevs_operational": 1, 00:34:35.189 "base_bdevs_list": [ 00:34:35.189 { 00:34:35.189 "name": null, 00:34:35.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:35.189 "is_configured": false, 00:34:35.189 "data_offset": 256, 00:34:35.189 "data_size": 7936 00:34:35.189 }, 00:34:35.189 { 00:34:35.189 "name": "BaseBdev2", 00:34:35.189 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:35.189 "is_configured": true, 00:34:35.189 "data_offset": 256, 00:34:35.189 "data_size": 7936 00:34:35.189 } 00:34:35.189 ] 00:34:35.189 }' 00:34:35.189 14:27:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:35.189 14:27:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:35.757 14:27:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:36.015 [2024-07-15 14:27:21.864123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:36.015 [2024-07-15 14:27:21.864740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:36.015 [2024-07-15 14:27:21.865066] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:34:36.015 [2024-07-15 14:27:21.865328] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:36.015 [2024-07-15 14:27:21.865785] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:36.015 [2024-07-15 14:27:21.866050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:36.015 [2024-07-15 14:27:21.866336] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:36.015 [2024-07-15 14:27:21.866469] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:36.015 [2024-07-15 14:27:21.866574] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:36.015 [2024-07-15 14:27:21.866717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:36.015 [2024-07-15 14:27:21.881038] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:34:36.015 spare 00:34:36.015 [2024-07-15 14:27:21.882743] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:36.015 14:27:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:34:36.951 14:27:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:36.951 14:27:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:36.951 14:27:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:36.951 14:27:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:36.951 14:27:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:36.951 14:27:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.951 14:27:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.210 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:37.210 "name": "raid_bdev1", 00:34:37.210 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:37.210 "strip_size_kb": 0, 00:34:37.210 "state": "online", 00:34:37.210 "raid_level": "raid1", 00:34:37.210 "superblock": true, 00:34:37.210 "num_base_bdevs": 2, 00:34:37.210 "num_base_bdevs_discovered": 2, 00:34:37.210 "num_base_bdevs_operational": 2, 00:34:37.210 "process": { 00:34:37.210 "type": "rebuild", 00:34:37.210 "target": "spare", 00:34:37.210 "progress": { 00:34:37.210 "blocks": 3072, 00:34:37.210 "percent": 38 00:34:37.210 } 00:34:37.210 }, 00:34:37.210 "base_bdevs_list": [ 00:34:37.210 { 00:34:37.210 "name": "spare", 00:34:37.210 "uuid": "78cc2c89-9f45-5056-ac66-9ad584a22fa6", 00:34:37.210 "is_configured": true, 00:34:37.210 "data_offset": 256, 00:34:37.210 "data_size": 7936 00:34:37.210 }, 00:34:37.210 { 00:34:37.210 "name": "BaseBdev2", 00:34:37.210 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:37.210 "is_configured": true, 00:34:37.210 "data_offset": 256, 00:34:37.210 "data_size": 7936 00:34:37.210 } 00:34:37.210 ] 00:34:37.210 }' 00:34:37.210 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:37.477 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:37.477 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:37.477 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:37.477 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:37.737 [2024-07-15 14:27:23.516686] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:37.738 [2024-07-15 14:27:23.592319] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:37.738 [2024-07-15 14:27:23.592580] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:37.738 [2024-07-15 14:27:23.592639] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:37.738 [2024-07-15 14:27:23.592781] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:37.738 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.996 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:37.996 "name": "raid_bdev1", 00:34:37.996 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:37.996 "strip_size_kb": 0, 00:34:37.996 "state": "online", 00:34:37.996 "raid_level": "raid1", 00:34:37.996 "superblock": true, 00:34:37.996 "num_base_bdevs": 2, 00:34:37.996 "num_base_bdevs_discovered": 1, 00:34:37.996 "num_base_bdevs_operational": 1, 00:34:37.996 "base_bdevs_list": [ 00:34:37.996 { 00:34:37.996 "name": null, 00:34:37.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:37.996 "is_configured": false, 00:34:37.996 "data_offset": 256, 00:34:37.996 "data_size": 7936 00:34:37.996 }, 00:34:37.996 { 00:34:37.996 "name": "BaseBdev2", 00:34:37.996 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:37.996 "is_configured": true, 00:34:37.996 "data_offset": 256, 00:34:37.996 "data_size": 7936 00:34:37.996 } 00:34:37.996 ] 00:34:37.996 }' 00:34:37.996 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:37.996 14:27:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:38.563 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:38.563 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
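The verify_raid_bdev_process check that runs here is just two more jq expressions over the same RPC output; a minimal sketch, reusing the socket and bdev name from the trace (the // "none" fallback covers the case where no rebuild is in progress):

info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
# While a rebuild is running these report "rebuild" and the target bdev name
# (e.g. "spare"); once it finishes, both fall back to "none".
jq -r '.process.type // "none"'   <<< "$info"
jq -r '.process.target // "none"' <<< "$info"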
00:34:38.563 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:38.563 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:38.563 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:38.563 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.563 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.821 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:38.821 "name": "raid_bdev1", 00:34:38.821 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:38.821 "strip_size_kb": 0, 00:34:38.822 "state": "online", 00:34:38.822 "raid_level": "raid1", 00:34:38.822 "superblock": true, 00:34:38.822 "num_base_bdevs": 2, 00:34:38.822 "num_base_bdevs_discovered": 1, 00:34:38.822 "num_base_bdevs_operational": 1, 00:34:38.822 "base_bdevs_list": [ 00:34:38.822 { 00:34:38.822 "name": null, 00:34:38.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.822 "is_configured": false, 00:34:38.822 "data_offset": 256, 00:34:38.822 "data_size": 7936 00:34:38.822 }, 00:34:38.822 { 00:34:38.822 "name": "BaseBdev2", 00:34:38.822 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:38.822 "is_configured": true, 00:34:38.822 "data_offset": 256, 00:34:38.822 "data_size": 7936 00:34:38.822 } 00:34:38.822 ] 00:34:38.822 }' 00:34:38.822 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:38.822 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:38.822 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:39.080 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:39.080 14:27:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:39.339 14:27:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:39.339 [2024-07-15 14:27:25.309097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:39.339 [2024-07-15 14:27:25.309369] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:39.339 [2024-07-15 14:27:25.309454] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:39.339 [2024-07-15 14:27:25.309703] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:39.339 [2024-07-15 14:27:25.309976] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:39.339 [2024-07-15 14:27:25.310133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:39.339 [2024-07-15 14:27:25.310326] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:39.339 [2024-07-15 14:27:25.310445] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:39.339 [2024-07-15 14:27:25.310547] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:39.339 BaseBdev1 00:34:39.339 14:27:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:40.717 "name": "raid_bdev1", 00:34:40.717 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:40.717 "strip_size_kb": 0, 00:34:40.717 "state": "online", 00:34:40.717 "raid_level": "raid1", 00:34:40.717 "superblock": true, 00:34:40.717 "num_base_bdevs": 2, 00:34:40.717 "num_base_bdevs_discovered": 1, 00:34:40.717 "num_base_bdevs_operational": 1, 00:34:40.717 "base_bdevs_list": [ 00:34:40.717 { 00:34:40.717 "name": null, 00:34:40.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.717 "is_configured": false, 00:34:40.717 "data_offset": 256, 00:34:40.717 "data_size": 7936 00:34:40.717 }, 00:34:40.717 { 00:34:40.717 "name": "BaseBdev2", 00:34:40.717 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:40.717 "is_configured": true, 00:34:40.717 "data_offset": 256, 00:34:40.717 "data_size": 7936 00:34:40.717 } 00:34:40.717 ] 00:34:40.717 }' 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:40.717 14:27:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:41.284 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:41.284 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:41.284 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:34:41.284 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:41.284 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:41.284 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:41.284 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:41.852 "name": "raid_bdev1", 00:34:41.852 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:41.852 "strip_size_kb": 0, 00:34:41.852 "state": "online", 00:34:41.852 "raid_level": "raid1", 00:34:41.852 "superblock": true, 00:34:41.852 "num_base_bdevs": 2, 00:34:41.852 "num_base_bdevs_discovered": 1, 00:34:41.852 "num_base_bdevs_operational": 1, 00:34:41.852 "base_bdevs_list": [ 00:34:41.852 { 00:34:41.852 "name": null, 00:34:41.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:41.852 "is_configured": false, 00:34:41.852 "data_offset": 256, 00:34:41.852 "data_size": 7936 00:34:41.852 }, 00:34:41.852 { 00:34:41.852 "name": "BaseBdev2", 00:34:41.852 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:41.852 "is_configured": true, 00:34:41.852 "data_offset": 256, 00:34:41.852 "data_size": 7936 00:34:41.852 } 00:34:41.852 ] 00:34:41.852 }' 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:41.852 14:27:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:41.852 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:42.110 [2024-07-15 14:27:27.875935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:42.110 [2024-07-15 14:27:27.876105] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:42.110 [2024-07-15 14:27:27.876119] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:42.110 request: 00:34:42.110 { 00:34:42.110 "base_bdev": "BaseBdev1", 00:34:42.110 "raid_bdev": "raid_bdev1", 00:34:42.110 "method": "bdev_raid_add_base_bdev", 00:34:42.110 "req_id": 1 00:34:42.110 } 00:34:42.110 Got JSON-RPC error response 00:34:42.110 response: 00:34:42.110 { 00:34:42.110 "code": -22, 00:34:42.110 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:42.110 } 00:34:42.110 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:34:42.110 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:42.110 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:42.110 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:42.110 14:27:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:43.046 14:27:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.304 
14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:43.304 "name": "raid_bdev1", 00:34:43.304 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:43.304 "strip_size_kb": 0, 00:34:43.304 "state": "online", 00:34:43.304 "raid_level": "raid1", 00:34:43.304 "superblock": true, 00:34:43.304 "num_base_bdevs": 2, 00:34:43.304 "num_base_bdevs_discovered": 1, 00:34:43.304 "num_base_bdevs_operational": 1, 00:34:43.304 "base_bdevs_list": [ 00:34:43.304 { 00:34:43.304 "name": null, 00:34:43.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.304 "is_configured": false, 00:34:43.304 "data_offset": 256, 00:34:43.304 "data_size": 7936 00:34:43.304 }, 00:34:43.304 { 00:34:43.304 "name": "BaseBdev2", 00:34:43.304 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:43.304 "is_configured": true, 00:34:43.304 "data_offset": 256, 00:34:43.304 "data_size": 7936 00:34:43.304 } 00:34:43.304 ] 00:34:43.304 }' 00:34:43.304 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:43.304 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:43.898 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:43.898 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:43.898 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:43.898 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:43.898 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:43.898 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.898 14:27:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.157 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:44.157 "name": "raid_bdev1", 00:34:44.157 "uuid": "16dd4cf7-ebfc-4119-973b-7a9abacff64a", 00:34:44.157 "strip_size_kb": 0, 00:34:44.157 "state": "online", 00:34:44.157 "raid_level": "raid1", 00:34:44.157 "superblock": true, 00:34:44.157 "num_base_bdevs": 2, 00:34:44.157 "num_base_bdevs_discovered": 1, 00:34:44.157 "num_base_bdevs_operational": 1, 00:34:44.157 "base_bdevs_list": [ 00:34:44.157 { 00:34:44.157 "name": null, 00:34:44.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:44.157 "is_configured": false, 00:34:44.157 "data_offset": 256, 00:34:44.157 "data_size": 7936 00:34:44.157 }, 00:34:44.157 { 00:34:44.157 "name": "BaseBdev2", 00:34:44.157 "uuid": "09428192-c1d2-5f0c-951a-c5e0056254c1", 00:34:44.157 "is_configured": true, 00:34:44.157 "data_offset": 256, 00:34:44.157 "data_size": 7936 00:34:44.157 } 00:34:44.157 ] 00:34:44.157 }' 00:34:44.157 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:44.416 14:27:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 220652 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 220652 ']' 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 220652 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 220652 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 220652' 00:34:44.416 killing process with pid 220652 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 220652 00:34:44.416 Received shutdown signal, test time was about 60.000000 seconds 00:34:44.416 00:34:44.416 Latency(us) 00:34:44.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:44.416 =================================================================================================================== 00:34:44.416 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:44.416 [2024-07-15 14:27:30.277266] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:44.416 [2024-07-15 14:27:30.277364] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:44.416 [2024-07-15 14:27:30.277408] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:44.416 [2024-07-15 14:27:30.277419] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:34:44.416 14:27:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 220652 00:34:44.676 [2024-07-15 14:27:30.529671] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:46.055 14:27:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:34:46.055 00:34:46.055 real 0m31.690s 00:34:46.055 user 0m51.050s 00:34:46.055 sys 0m2.818s 00:34:46.055 14:27:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:46.055 14:27:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:34:46.055 ************************************ 00:34:46.055 END TEST raid_rebuild_test_sb_md_interleaved 00:34:46.055 ************************************ 00:34:46.055 14:27:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:34:46.055 14:27:31 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:34:46.055 14:27:31 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:34:46.055 14:27:31 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 220652 ']' 00:34:46.055 14:27:31 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 220652 00:34:46.055 14:27:31 
bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:34:46.055 00:34:46.055 real 21m18.245s 00:34:46.055 user 36m17.121s 00:34:46.055 sys 2m33.727s 00:34:46.055 14:27:31 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:46.055 14:27:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:46.055 ************************************ 00:34:46.055 END TEST bdev_raid 00:34:46.055 ************************************ 00:34:46.055 14:27:31 -- common/autotest_common.sh@1142 -- # return 0 00:34:46.055 14:27:31 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:34:46.055 14:27:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:46.055 14:27:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:46.055 14:27:31 -- common/autotest_common.sh@10 -- # set +x 00:34:46.055 ************************************ 00:34:46.055 START TEST bdevperf_config 00:34:46.055 ************************************ 00:34:46.055 14:27:31 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:34:46.055 * Looking for test storage... 00:34:46.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:46.055 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:46.055 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@8 -- # 
local job_section=job1 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:46.055 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:46.055 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:46.055 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:46.055 14:27:31 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:50.246 14:27:36 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 14:27:31.947207] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:34:50.246 [2024-07-15 14:27:31.947456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221504 ] 00:34:50.246 Using job config with 4 jobs 00:34:50.246 [2024-07-15 14:27:32.113863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.246 [2024-07-15 14:27:32.380659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.246 cpumask for '\''job0'\'' is too big 00:34:50.246 cpumask for '\''job1'\'' is too big 00:34:50.246 cpumask for '\''job2'\'' is too big 00:34:50.246 cpumask for '\''job3'\'' is too big 00:34:50.246 Running I/O for 2 seconds... 
00:34:50.246 00:34:50.246 Latency(us) 00:34:50.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.00 81379.26 79.47 0.00 0.00 3144.12 621.85 4676.89 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81397.26 79.49 0.00 0.00 3141.22 599.51 4081.11 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81383.43 79.48 0.00 0.00 3139.77 554.82 3693.85 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81369.12 79.46 0.00 0.00 3138.29 551.10 3708.74 00:34:50.246 =================================================================================================================== 00:34:50.246 Total : 325529.07 317.90 0.00 0.00 3140.85 551.10 4676.89' 00:34:50.246 14:27:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 14:27:31.947207] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:34:50.246 [2024-07-15 14:27:31.947456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221504 ] 00:34:50.246 Using job config with 4 jobs 00:34:50.246 [2024-07-15 14:27:32.113863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.246 [2024-07-15 14:27:32.380659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.246 cpumask for '\''job0'\'' is too big 00:34:50.246 cpumask for '\''job1'\'' is too big 00:34:50.246 cpumask for '\''job2'\'' is too big 00:34:50.246 cpumask for '\''job3'\'' is too big 00:34:50.246 Running I/O for 2 seconds... 00:34:50.246 00:34:50.246 Latency(us) 00:34:50.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.00 81379.26 79.47 0.00 0.00 3144.12 621.85 4676.89 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81397.26 79.49 0.00 0.00 3141.22 599.51 4081.11 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81383.43 79.48 0.00 0.00 3139.77 554.82 3693.85 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81369.12 79.46 0.00 0.00 3138.29 551.10 3708.74 00:34:50.246 =================================================================================================================== 00:34:50.246 Total : 325529.07 317.90 0.00 0.00 3140.85 551.10 4676.89' 00:34:50.246 14:27:36 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 14:27:31.947207] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:34:50.246 [2024-07-15 14:27:31.947456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221504 ] 00:34:50.246 Using job config with 4 jobs 00:34:50.246 [2024-07-15 14:27:32.113863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.246 [2024-07-15 14:27:32.380659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:50.246 cpumask for '\''job0'\'' is too big 00:34:50.246 cpumask for '\''job1'\'' is too big 00:34:50.246 cpumask for '\''job2'\'' is too big 00:34:50.246 cpumask for '\''job3'\'' is too big 00:34:50.246 Running I/O for 2 seconds... 00:34:50.246 00:34:50.246 Latency(us) 00:34:50.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.00 81379.26 79.47 0.00 0.00 3144.12 621.85 4676.89 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81397.26 79.49 0.00 0.00 3141.22 599.51 4081.11 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81383.43 79.48 0.00 0.00 3139.77 554.82 3693.85 00:34:50.246 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:50.246 Malloc0 : 2.01 81369.12 79.46 0.00 0.00 3138.29 551.10 3708.74 00:34:50.246 =================================================================================================================== 00:34:50.246 Total : 325529.07 317.90 0.00 0.00 3140.85 551.10 4676.89' 00:34:50.246 14:27:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:50.246 14:27:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:50.246 14:27:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:34:50.246 14:27:36 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:50.506 [2024-07-15 14:27:36.261044] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:34:50.506 [2024-07-15 14:27:36.261318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221557 ] 00:34:50.506 [2024-07-15 14:27:36.434649] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.765 [2024-07-15 14:27:36.662068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.333 cpumask for 'job0' is too big 00:34:51.333 cpumask for 'job1' is too big 00:34:51.333 cpumask for 'job2' is too big 00:34:51.333 cpumask for 'job3' is too big 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:34:54.668 Running I/O for 2 seconds... 
00:34:54.668 00:34:54.668 Latency(us) 00:34:54.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.668 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:54.668 Malloc0 : 2.00 82148.78 80.22 0.00 0.00 3114.62 592.06 5153.51 00:34:54.668 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:54.668 Malloc0 : 2.01 82157.47 80.23 0.00 0.00 3112.22 532.48 4617.31 00:34:54.668 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:54.668 Malloc0 : 2.01 82140.97 80.22 0.00 0.00 3110.51 603.23 4051.32 00:34:54.668 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:34:54.668 Malloc0 : 2.01 82125.30 80.20 0.00 0.00 3108.95 636.74 3902.37 00:34:54.668 =================================================================================================================== 00:34:54.668 Total : 328572.53 320.87 0.00 0.00 3111.57 532.48 5153.51' 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:54.668 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:54.668 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:54.668 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:54.668 14:27:40 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:58.853 
14:27:44 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 14:27:40.503183] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:34:58.853 [2024-07-15 14:27:40.503436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221608 ] 00:34:58.853 Using job config with 3 jobs 00:34:58.853 [2024-07-15 14:27:40.664568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.853 [2024-07-15 14:27:40.874819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.853 cpumask for '\''job0'\'' is too big 00:34:58.853 cpumask for '\''job1'\'' is too big 00:34:58.853 cpumask for '\''job2'\'' is too big 00:34:58.853 Running I/O for 2 seconds... 00:34:58.853 00:34:58.853 Latency(us) 00:34:58.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.00 109848.82 107.27 0.00 0.00 2328.84 644.19 3693.85 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.00 109822.96 107.25 0.00 0.00 2327.73 595.78 3098.07 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.01 109802.97 107.23 0.00 0.00 2326.60 592.06 2904.44 00:34:58.853 =================================================================================================================== 00:34:58.853 Total : 329474.75 321.75 0.00 0.00 2327.72 592.06 3693.85' 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 14:27:40.503183] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:34:58.853 [2024-07-15 14:27:40.503436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221608 ] 00:34:58.853 Using job config with 3 jobs 00:34:58.853 [2024-07-15 14:27:40.664568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.853 [2024-07-15 14:27:40.874819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.853 cpumask for '\''job0'\'' is too big 00:34:58.853 cpumask for '\''job1'\'' is too big 00:34:58.853 cpumask for '\''job2'\'' is too big 00:34:58.853 Running I/O for 2 seconds... 
00:34:58.853 00:34:58.853 Latency(us) 00:34:58.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.00 109848.82 107.27 0.00 0.00 2328.84 644.19 3693.85 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.00 109822.96 107.25 0.00 0.00 2327.73 595.78 3098.07 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.01 109802.97 107.23 0.00 0.00 2326.60 592.06 2904.44 00:34:58.853 =================================================================================================================== 00:34:58.853 Total : 329474.75 321.75 0.00 0.00 2327.72 592.06 3693.85' 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 14:27:40.503183] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:34:58.853 [2024-07-15 14:27:40.503436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221608 ] 00:34:58.853 Using job config with 3 jobs 00:34:58.853 [2024-07-15 14:27:40.664568] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:58.853 [2024-07-15 14:27:40.874819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.853 cpumask for '\''job0'\'' is too big 00:34:58.853 cpumask for '\''job1'\'' is too big 00:34:58.853 cpumask for '\''job2'\'' is too big 00:34:58.853 Running I/O for 2 seconds... 00:34:58.853 00:34:58.853 Latency(us) 00:34:58.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.00 109848.82 107.27 0.00 0.00 2328.84 644.19 3693.85 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.00 109822.96 107.25 0.00 0.00 2327.73 595.78 3098.07 00:34:58.853 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:34:58.853 Malloc0 : 2.01 109802.97 107.23 0.00 0.00 2326.60 592.06 2904.44 00:34:58.853 =================================================================================================================== 00:34:58.853 Total : 329474.75 321.75 0.00 0.00 2327.72 592.06 3693.85' 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 
00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:34:58.853 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:58.853 14:27:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:58.854 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:34:58.854 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:34:58.854 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:34:58.854 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:34:58.854 14:27:44 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:35:03.043 14:27:48 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 14:27:44.772705] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:03.043 [2024-07-15 14:27:44.772968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221666 ] 00:35:03.043 Using job config with 4 jobs 00:35:03.043 [2024-07-15 14:27:44.935058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.043 [2024-07-15 14:27:45.132592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.043 cpumask for '\''job0'\'' is too big 00:35:03.043 cpumask for '\''job1'\'' is too big 00:35:03.043 cpumask for '\''job2'\'' is too big 00:35:03.043 cpumask for '\''job3'\'' is too big 00:35:03.043 Running I/O for 2 seconds... 00:35:03.043 00:35:03.043 Latency(us) 00:35:03.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.043 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.043 Malloc0 : 2.01 41790.63 40.81 0.00 0.00 6123.20 1496.90 10485.76 00:35:03.043 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.01 41780.65 40.80 0.00 0.00 6122.18 1712.87 10307.03 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41772.01 40.79 0.00 0.00 6115.76 1266.04 9055.88 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.01 41761.31 40.78 0.00 0.00 6115.60 1377.75 9115.46 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41808.37 40.83 0.00 0.00 6102.14 1295.83 7804.74 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.02 41797.25 40.82 0.00 0.00 6101.15 1407.53 7745.16 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.02 41788.20 40.81 0.00 0.00 6095.78 1280.93 7298.33 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.02 41778.23 40.80 0.00 0.00 6095.38 1474.56 7298.33 00:35:03.044 =================================================================================================================== 00:35:03.044 Total : 334276.65 326.44 0.00 0.00 6108.88 1266.04 10485.76' 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 14:27:44.772705] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:03.044 [2024-07-15 14:27:44.772968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221666 ] 00:35:03.044 Using job config with 4 jobs 00:35:03.044 [2024-07-15 14:27:44.935058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.044 [2024-07-15 14:27:45.132592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.044 cpumask for '\''job0'\'' is too big 00:35:03.044 cpumask for '\''job1'\'' is too big 00:35:03.044 cpumask for '\''job2'\'' is too big 00:35:03.044 cpumask for '\''job3'\'' is too big 00:35:03.044 Running I/O for 2 seconds... 
00:35:03.044 00:35:03.044 Latency(us) 00:35:03.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41790.63 40.81 0.00 0.00 6123.20 1496.90 10485.76 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.01 41780.65 40.80 0.00 0.00 6122.18 1712.87 10307.03 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41772.01 40.79 0.00 0.00 6115.76 1266.04 9055.88 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.01 41761.31 40.78 0.00 0.00 6115.60 1377.75 9115.46 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41808.37 40.83 0.00 0.00 6102.14 1295.83 7804.74 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.02 41797.25 40.82 0.00 0.00 6101.15 1407.53 7745.16 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.02 41788.20 40.81 0.00 0.00 6095.78 1280.93 7298.33 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.02 41778.23 40.80 0.00 0.00 6095.38 1474.56 7298.33 00:35:03.044 =================================================================================================================== 00:35:03.044 Total : 334276.65 326.44 0.00 0.00 6108.88 1266.04 10485.76' 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 14:27:44.772705] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:03.044 [2024-07-15 14:27:44.772968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221666 ] 00:35:03.044 Using job config with 4 jobs 00:35:03.044 [2024-07-15 14:27:44.935058] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.044 [2024-07-15 14:27:45.132592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.044 cpumask for '\''job0'\'' is too big 00:35:03.044 cpumask for '\''job1'\'' is too big 00:35:03.044 cpumask for '\''job2'\'' is too big 00:35:03.044 cpumask for '\''job3'\'' is too big 00:35:03.044 Running I/O for 2 seconds... 
00:35:03.044 00:35:03.044 Latency(us) 00:35:03.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41790.63 40.81 0.00 0.00 6123.20 1496.90 10485.76 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.01 41780.65 40.80 0.00 0.00 6122.18 1712.87 10307.03 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41772.01 40.79 0.00 0.00 6115.76 1266.04 9055.88 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.01 41761.31 40.78 0.00 0.00 6115.60 1377.75 9115.46 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.01 41808.37 40.83 0.00 0.00 6102.14 1295.83 7804.74 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.02 41797.25 40.82 0.00 0.00 6101.15 1407.53 7745.16 00:35:03.044 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc0 : 2.02 41788.20 40.81 0.00 0.00 6095.78 1280.93 7298.33 00:35:03.044 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:35:03.044 Malloc1 : 2.02 41778.23 40.80 0.00 0.00 6095.38 1474.56 7298.33 00:35:03.044 =================================================================================================================== 00:35:03.044 Total : 334276.65 326.44 0.00 0.00 6108.88 1266.04 10485.76' 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:35:03.044 14:27:48 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:35:03.044 00:35:03.044 real 0m17.094s 00:35:03.044 user 0m15.368s 00:35:03.044 sys 0m1.156s 00:35:03.044 14:27:48 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:03.044 14:27:48 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:35:03.044 ************************************ 00:35:03.044 END TEST bdevperf_config 00:35:03.044 ************************************ 00:35:03.044 14:27:48 -- common/autotest_common.sh@1142 -- # return 0 00:35:03.044 14:27:48 -- spdk/autotest.sh@192 -- # uname -s 00:35:03.044 14:27:48 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:35:03.045 14:27:48 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:35:03.045 14:27:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:03.045 14:27:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:03.045 14:27:48 -- common/autotest_common.sh@10 -- # set +x 00:35:03.045 ************************************ 00:35:03.045 START TEST reactor_set_interrupt 00:35:03.045 ************************************ 00:35:03.045 14:27:48 reactor_set_interrupt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:35:03.045 * Looking for test storage... 
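[Editor's sketch] The bdevperf_config run that finishes above is driven by two small helpers in bdevperf/common.sh: create_job, which emits a [global]/[jobN] section for the test configuration, and get_num_jobs, which pulls the job count back out of the captured bdevperf output. The sketch below is reconstructed only from the xtrace lines visible above; the redirection of create_job's output into test.conf and its stdin handling are assumptions, since the trace does not show them.

create_job() {
    # Arguments as they appear in the trace: section name, then (empty here) rw and filename.
    local job_section=$1
    local rw=$2
    local filename=$3
    # The trace compares the section name against "global" before choosing the header.
    if [[ $job_section == "global" ]]; then
        job='[global]'
    else
        job="[$job_section]"
    fi
    echo "$job"   # assumption: the caller collects this into test.conf; the redirection is not visible in the log
    cat           # matches the trailing '-- # cat' trace lines; the section body is read from stdin
}

get_num_jobs() {
    # Extract N from bdevperf's "Using job config with N jobs" notice, as traced at common.sh@32.
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

test_config.sh@43 then asserts that four jobs were configured — the '[[ 4 == \4 ]]' line above — before cleanup removes test.conf and the EXIT trap is dropped.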
00:35:03.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.045 14:27:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:35:03.045 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:35:03.045 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.045 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.045 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:35:03.045 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:03.045 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:35:03.045 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:35:03.045 
14:27:49 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:35:03.045 14:27:49 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@52 -- # 
CONFIG_VFIO_USER=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:35:03.306 14:27:49 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:35:03.306 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:03.306 14:27:49 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:03.306 14:27:49 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 
00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:35:03.307 #define SPDK_CONFIG_H 00:35:03.307 #define SPDK_CONFIG_APPS 1 00:35:03.307 #define SPDK_CONFIG_ARCH native 00:35:03.307 #define SPDK_CONFIG_ASAN 1 00:35:03.307 #undef SPDK_CONFIG_AVAHI 00:35:03.307 #undef SPDK_CONFIG_CET 00:35:03.307 #define SPDK_CONFIG_COVERAGE 1 00:35:03.307 #define SPDK_CONFIG_CROSS_PREFIX 00:35:03.307 #undef SPDK_CONFIG_CRYPTO 00:35:03.307 #undef SPDK_CONFIG_CRYPTO_MLX5 00:35:03.307 #undef SPDK_CONFIG_CUSTOMOCF 00:35:03.307 #undef SPDK_CONFIG_DAOS 00:35:03.307 #define SPDK_CONFIG_DAOS_DIR 00:35:03.307 #define SPDK_CONFIG_DEBUG 1 00:35:03.307 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:35:03.307 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:35:03.307 #define SPDK_CONFIG_DPDK_INC_DIR 00:35:03.307 #define SPDK_CONFIG_DPDK_LIB_DIR 00:35:03.307 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:35:03.307 #undef SPDK_CONFIG_DPDK_UADK 00:35:03.307 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:03.307 #define SPDK_CONFIG_EXAMPLES 1 00:35:03.307 #undef SPDK_CONFIG_FC 00:35:03.307 #define SPDK_CONFIG_FC_PATH 00:35:03.307 #define SPDK_CONFIG_FIO_PLUGIN 1 00:35:03.307 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:35:03.307 #undef SPDK_CONFIG_FUSE 00:35:03.307 #undef SPDK_CONFIG_FUZZER 00:35:03.307 #define SPDK_CONFIG_FUZZER_LIB 00:35:03.307 #undef SPDK_CONFIG_GOLANG 00:35:03.307 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:35:03.307 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:35:03.307 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:35:03.307 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:35:03.307 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:35:03.307 #undef SPDK_CONFIG_HAVE_LIBBSD 00:35:03.307 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:35:03.307 #define SPDK_CONFIG_IDXD 1 00:35:03.307 #undef SPDK_CONFIG_IDXD_KERNEL 00:35:03.307 #undef SPDK_CONFIG_IPSEC_MB 00:35:03.307 #define SPDK_CONFIG_IPSEC_MB_DIR 00:35:03.307 #define SPDK_CONFIG_ISAL 1 00:35:03.307 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:35:03.307 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:35:03.307 #define SPDK_CONFIG_LIBDIR 00:35:03.307 #undef SPDK_CONFIG_LTO 00:35:03.307 #define SPDK_CONFIG_MAX_LCORES 
128 00:35:03.307 #define SPDK_CONFIG_NVME_CUSE 1 00:35:03.307 #undef SPDK_CONFIG_OCF 00:35:03.307 #define SPDK_CONFIG_OCF_PATH 00:35:03.307 #define SPDK_CONFIG_OPENSSL_PATH 00:35:03.307 #undef SPDK_CONFIG_PGO_CAPTURE 00:35:03.307 #define SPDK_CONFIG_PGO_DIR 00:35:03.307 #undef SPDK_CONFIG_PGO_USE 00:35:03.307 #define SPDK_CONFIG_PREFIX /usr/local 00:35:03.307 #undef SPDK_CONFIG_RAID5F 00:35:03.307 #undef SPDK_CONFIG_RBD 00:35:03.307 #define SPDK_CONFIG_RDMA 1 00:35:03.307 #define SPDK_CONFIG_RDMA_PROV verbs 00:35:03.307 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:35:03.307 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:35:03.307 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:35:03.307 #undef SPDK_CONFIG_SHARED 00:35:03.307 #undef SPDK_CONFIG_SMA 00:35:03.307 #define SPDK_CONFIG_TESTS 1 00:35:03.307 #undef SPDK_CONFIG_TSAN 00:35:03.307 #undef SPDK_CONFIG_UBLK 00:35:03.307 #undef SPDK_CONFIG_UBSAN 00:35:03.307 #define SPDK_CONFIG_UNIT_TESTS 1 00:35:03.307 #undef SPDK_CONFIG_URING 00:35:03.307 #define SPDK_CONFIG_URING_PATH 00:35:03.307 #undef SPDK_CONFIG_URING_ZNS 00:35:03.307 #undef SPDK_CONFIG_USDT 00:35:03.307 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:35:03.307 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:35:03.307 #undef SPDK_CONFIG_VFIO_USER 00:35:03.307 #define SPDK_CONFIG_VFIO_USER_DIR 00:35:03.307 #define SPDK_CONFIG_VHOST 1 00:35:03.307 #define SPDK_CONFIG_VIRTIO 1 00:35:03.307 #undef SPDK_CONFIG_VTUNE 00:35:03.307 #define SPDK_CONFIG_VTUNE_DIR 00:35:03.307 #define SPDK_CONFIG_WERROR 1 00:35:03.307 #define SPDK_CONFIG_WPDK_DIR 00:35:03.307 #undef SPDK_CONFIG_XNVME 00:35:03.307 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:35:03.307 14:27:49 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:35:03.307 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:03.307 14:27:49 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:03.307 14:27:49 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:03.307 14:27:49 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:03.307 14:27:49 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:03.307 14:27:49 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:03.307 14:27:49 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:03.307 14:27:49 reactor_set_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:03.307 14:27:49 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:03.307 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:35:03.307 14:27:49 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:35:03.308 14:27:49 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:35:03.308 14:27:49 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:35:03.308 14:27:49 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:35:03.308 14:27:49 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:35:03.308 14:27:49 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:35:03.308 14:27:49 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@70 -- # : 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 1 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:35:03.308 14:27:49 reactor_set_interrupt -- 
common/autotest_common.sh@96 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 1 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0 00:35:03.308 14:27:49 
reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : true 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 1 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0 00:35:03.308 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@154 -- # : 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 1 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@167 -- # : 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@169 -- # : 0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 
00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@193 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@200 -- # cat 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@263 -- # export valgrind= 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@263 -- # valgrind= 00:35:03.309 14:27:49 
reactor_set_interrupt -- common/autotest_common.sh@269 -- # uname -s 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKE=make 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@299 -- # TEST_MODE= 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@318 -- # [[ -z 221763 ]] 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@318 -- # kill -0 221763 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@331 -- # local mount target_dir 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:35:03.309 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.sK5ZW2 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.sK5ZW2/tests/interrupt /tmp/spdk.sK5ZW2 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@327 -- # df -T 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:35:03.310 14:27:49 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=6265700352 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=6270410752 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=2487136256 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=2508165120 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=21028864 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=xfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=12907356160 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=20303577088 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=7396220928 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=xfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=896184320 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1042161664 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=145977344 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=97312768 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # 
sizes["$mount"]=104607744 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=7294976 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=1254076416 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254080512 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt/output 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # avails["$mount"]=94543278080 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@363 -- # uses["$mount"]=5159501824 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:35:03.310 * Looking for test storage... 
00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@368 -- # local target_space new_size 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@372 -- # mount=/ 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@374 -- # target_space=12907356160 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ xfs == tmpfs ]] 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ xfs == ramfs ]] 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@381 -- # new_size=9610813440 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@389 -- # return 0 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # set -o errtrace 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # true 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@1689 -- # xtrace_fd 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:35:03.310 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:35:03.311 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=221807 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:03.311 14:27:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 221807 /var/tmp/spdk.sock 00:35:03.311 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 221807 ']' 00:35:03.311 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:03.311 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:03.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:03.311 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:03.311 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:03.311 14:27:49 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:03.311 [2024-07-15 14:27:49.197969] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
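Here start_intr_tgt launches the interrupt_tgt example application on a three-core mask, arms a cleanup trap, and waits for its RPC socket before the test proceeds. A minimal sketch of that sequence, assuming the waitforlisten helper is essentially a bounded retry loop against the socket (its internals are not shown in this excerpt):

    rpc_addr=/var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r "$rpc_addr" -E -g &
    intr_tgt_pid=$!
    trap 'kill "$intr_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # assumed stand-in for waitforlisten: poll the socket until the target answers
    for ((i = 0; i < 100; i++)); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

The -E flag, the -g flag and the 0x07 core mask are taken verbatim from the trace; the notices that follow (three reactors starting, app_thread set to intr mode) are the target confirming it came up in interrupt mode.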
00:35:03.311 [2024-07-15 14:27:49.198673] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221807 ] 00:35:03.569 [2024-07-15 14:27:49.372399] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:03.827 [2024-07-15 14:27:49.580160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:03.827 [2024-07-15 14:27:49.580292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.827 [2024-07-15 14:27:49.580292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.085 [2024-07-15 14:27:49.873834] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:04.343 14:27:50 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:04.344 14:27:50 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:35:04.344 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:35:04.344 14:27:50 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:04.602 Malloc0 00:35:04.602 Malloc1 00:35:04.602 Malloc2 00:35:04.602 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:35:04.602 14:27:50 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:35:04.602 14:27:50 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:04.602 14:27:50 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:35:04.602 5000+0 records in 00:35:04.602 5000+0 records out 00:35:04.602 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0155224 s, 660 MB/s 00:35:04.602 14:27:50 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:35:05.182 AIO0 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 221807 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 221807 without_thd 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=221807 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:05.182 14:27:50 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:35:05.182 14:27:50 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:35:05.441 14:27:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:35:05.700 spdk_thread ids are 1 on reactor0. 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 221807 0 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221807 0 idle 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221807 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221807 -w 256 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221807 root 20 0 20.1t 134136 21884 S 0.0 1.1 0:00.74 reactor_0' 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221807 root 20 0 20.1t 134136 21884 S 0.0 1.1 0:00.74 reactor_0 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:05.700 
14:27:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 221807 1 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221807 1 idle 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221807 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221807 -w 256 00:35:05.700 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221810 root 20 0 20.1t 134136 21884 S 0.0 1.1 0:00.00 reactor_1' 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221810 root 20 0 20.1t 134136 21884 S 0.0 1.1 0:00.00 reactor_1 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 221807 2 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221807 2 idle 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221807 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:05.959 14:27:51 reactor_set_interrupt 
-- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221807 -w 256 00:35:05.959 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:35:06.218 14:27:51 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221811 root 20 0 20.1t 134136 21884 S 0.0 1.1 0:00.00 reactor_2' 00:35:06.218 14:27:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221811 root 20 0 20.1t 134136 21884 S 0.0 1.1 0:00.00 reactor_2 00:35:06.218 14:27:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:06.218 14:27:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:06.218 14:27:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:06.218 14:27:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:06.219 14:27:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:06.219 14:27:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:06.219 14:27:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:06.219 14:27:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:06.219 14:27:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:35:06.219 14:27:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:35:06.219 14:27:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:35:06.478 [2024-07-15 14:27:52.277699] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:06.478 14:27:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:35:06.738 [2024-07-15 14:27:52.561538] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:35:06.738 [2024-07-15 14:27:52.562324] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:06.738 14:27:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:35:06.998 [2024-07-15 14:27:52.861419] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
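The switch itself is a three-step dance visible in the trace: reactor_get_thread_ids first asks the target which SPDK threads sit on a given reactor (thread_get_stats filtered by cpumask with jq); then, in this "without_thd" variant, the app thread is migrated off reactor 0 with thread_set_cpumask; finally reactor_set_interrupt_mode (an rpc.py plugin command) flips reactors 0 and 2 out of interrupt mode with -d. Condensed from the commands above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    reactor_get_thread_ids() {
        local mask=$(( $1 ))   # assumed normalization: 0x1 -> 1, 0x4 -> 4, as seen in the trace
        $rpc thread_get_stats \
            | jq --arg reactor_cpumask "$mask" '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }
    thd0_ids=($(reactor_get_thread_ids 0x1))   # -> 1 (the app thread); reactor 2 returns nothing here
    # move the app thread off reactor 0 before switching it (the "without_thd" variant)
    for tid in "${thd0_ids[@]}"; do $rpc thread_set_cpumask -i "$tid" -m 0x2; done
    # -d = disable interrupt mode, i.e. put the reactor into poll mode
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
    $rpc --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d

The busy checks that follow confirm the poll-mode reactors now spin at roughly 100% CPU.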
00:35:06.998 [2024-07-15 14:27:52.862169] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 221807 0 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 221807 0 busy 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221807 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221807 -w 256 00:35:06.998 14:27:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221807 root 20 0 20.1t 134240 21884 R 99.9 1.1 0:01.23 reactor_0' 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221807 root 20 0 20.1t 134240 21884 R 99.9 1.1 0:01.23 reactor_0 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 221807 2 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 221807 2 busy 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221807 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:35:07.257 14:27:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221807 -w 256 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 221811 root 20 0 20.1t 134240 21884 R 99.9 1.1 0:00.35 reactor_2' 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221811 root 20 0 20.1t 134240 21884 R 99.9 1.1 0:00.35 reactor_2 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:07.258 14:27:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:35:07.825 [2024-07-15 14:27:53.585376] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:35:07.825 [2024-07-15 14:27:53.586291] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 221807 2 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221807 2 idle 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221807 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221807 -w 256 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221811 root 20 0 20.1t 134304 21884 S 0.0 1.1 0:00.72 reactor_2' 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221811 root 20 0 20.1t 134304 21884 S 0.0 1.1 0:00.72 reactor_2 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 
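The busy checks above (and the idle checks earlier) all use the same probe: take one batch snapshot of the target's threads with top, pick out the reactor_N row, and read its %CPU column; roughly 70% or more counts as busy and 30% or less as idle, matching the comparisons in the trace. A condensed sketch (the real helper retries the probe up to ten times):

    pid=221807 idx=2 state=busy          # values taken from the trace
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    cpu_rate=$(sed -e 's/^\s*//g' <<< "$row" | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}              # 99.9 -> 99, 0.0 -> 0, as in the trace
    if [[ $state == busy ]]; then
        (( cpu_rate >= 70 )) || echo "reactor_$idx is not busy (${cpu_rate}%)"
    else
        (( cpu_rate <= 30 )) || echo "reactor_$idx is not idle (${cpu_rate}%)"
    fi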
00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:07.825 14:27:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:35:08.084 [2024-07-15 14:27:54.005411] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:35:08.084 [2024-07-15 14:27:54.006059] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:08.084 14:27:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:35:08.084 14:27:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:35:08.084 14:27:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:35:08.343 [2024-07-15 14:27:54.237863] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 221807 0 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221807 0 idle 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221807 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:35:08.343 14:27:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221807 -w 256 00:35:08.602 14:27:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221807 root 20 0 20.1t 134392 21884 S 0.0 1.1 0:02.19 reactor_0' 00:35:08.602 14:27:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221807 root 20 0 20.1t 134392 21884 S 0.0 1.1 0:02.19 reactor_0 00:35:08.602 14:27:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:08.602 14:27:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:08.602 14:27:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:08.602 14:27:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:08.602 14:27:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:08.603 14:27:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:08.603 14:27:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:08.603 14:27:54 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:08.603 14:27:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:35:08.603 14:27:54 reactor_set_interrupt -- 
interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:35:08.603 14:27:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:35:08.603 14:27:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 221807 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@948 -- # '[' -z 221807 ']' 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 221807 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 221807 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:08.603 killing process with pid 221807 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 221807' 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 221807 00:35:08.603 14:27:54 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 221807 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=221953 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 221953 /var/tmp/spdk.sock 00:35:10.016 14:27:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:35:10.016 14:27:55 reactor_set_interrupt -- common/autotest_common.sh@829 -- # '[' -z 221953 ']' 00:35:10.016 14:27:55 reactor_set_interrupt -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:10.016 14:27:55 reactor_set_interrupt -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:10.016 14:27:55 reactor_set_interrupt -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:10.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:10.016 14:27:55 reactor_set_interrupt -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:10.016 14:27:55 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:10.016 [2024-07-15 14:27:55.737632] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
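Tearing the target down goes through killprocess, which the trace shows being deliberately careful: confirm the pid is still alive, resolve its comm name via ps, make sure it is not the sudo wrapper, and only then kill and wait. A rough sketch of that shape; the branch taken when the name is sudo is not exercised here, so it is left out:

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name != sudo ]]; then            # here it resolves to reactor_0
            echo "killing process with pid $pid"
            kill "$pid"
        fi
        wait "$pid"
    }
    killprocess "$intr_tgt_pid"   # pid 221807 in the first pass, 221953 in the second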
00:35:10.016 [2024-07-15 14:27:55.737868] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid221953 ] 00:35:10.016 [2024-07-15 14:27:55.911434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:10.275 [2024-07-15 14:27:56.123130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:10.275 [2024-07-15 14:27:56.123286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:10.275 [2024-07-15 14:27:56.123289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:10.534 [2024-07-15 14:27:56.398497] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:10.793 14:27:56 reactor_set_interrupt -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:10.793 14:27:56 reactor_set_interrupt -- common/autotest_common.sh@862 -- # return 0 00:35:10.793 14:27:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:35:10.793 14:27:56 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:11.052 Malloc0 00:35:11.052 Malloc1 00:35:11.052 Malloc2 00:35:11.052 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:35:11.052 14:27:57 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:35:11.052 14:27:57 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:11.052 14:27:57 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:35:11.311 5000+0 records in 00:35:11.311 5000+0 records out 00:35:11.311 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0203097 s, 504 MB/s 00:35:11.311 14:27:57 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:35:11.311 AIO0 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 221953 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 221953 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=221953 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:35:11.570 14:27:57 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg 
reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:35:11.829 14:27:57 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:35:12.088 14:27:57 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:35:12.088 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:35:12.088 spdk_thread ids are 1 on reactor0. 00:35:12.088 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:35:12.088 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:35:12.088 14:27:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 221953 0 00:35:12.088 14:27:57 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221953 0 idle 00:35:12.088 14:27:57 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221953 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221953 -w 256 00:35:12.089 14:27:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221953 root 20 0 20.1t 131944 21840 S 0.0 1.1 0:00.71 reactor_0' 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221953 root 20 0 20.1t 131944 21840 S 0.0 1.1 0:00.71 reactor_0 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:12.089 14:27:58 reactor_set_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 221953 1 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221953 1 idle 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221953 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221953 -w 256 00:35:12.089 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221960 root 20 0 20.1t 131944 21840 S 0.0 1.1 0:00.00 reactor_1' 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221960 root 20 0 20.1t 131944 21840 S 0.0 1.1 0:00.00 reactor_1 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 221953 2 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221953 2 idle 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221953 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( 
j != 0 )) 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221953 -w 256 00:35:12.347 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221961 root 20 0 20.1t 131944 21840 S 0.0 1.1 0:00.00 reactor_2' 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221961 root 20 0 20.1t 131944 21840 S 0.0 1.1 0:00.00 reactor_2 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:35:12.606 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:35:12.864 [2024-07-15 14:27:58.624799] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:35:12.864 [2024-07-15 14:27:58.625789] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:35:12.864 [2024-07-15 14:27:58.626163] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:12.864 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:35:13.120 [2024-07-15 14:27:58.904650] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
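Before each pass the target is also given work to schedule: setup_bdev_mem creates three malloc bdevs through a single rpc.py session, and setup_bdev_aio (skipped on FreeBSD) backs an AIO bdev with a 10 MB zero-filled file, both visible in the traces above. A condensed sketch; the bdev_malloc_create arguments are not shown in the log, so the sizes below are assumptions, and the three malloc creations are shown as separate rpc.py calls for simplicity:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
    $rpc bdev_malloc_create -b Malloc0 32 512   # assumed size/block args
    $rpc bdev_malloc_create -b Malloc1 32 512
    $rpc bdev_malloc_create -b Malloc2 32 512
    if [[ $(uname -s) != FreeBSD ]]; then
        dd if=/dev/zero of="$aiofile" bs=2048 count=5000
        $rpc bdev_aio_create "$aiofile" AIO0 2048
    fi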
00:35:13.120 [2024-07-15 14:27:58.905307] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 221953 0 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 221953 0 busy 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221953 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221953 -w 256 00:35:13.120 14:27:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221953 root 20 0 20.1t 131992 21840 R 99.9 1.1 0:01.16 reactor_0' 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221953 root 20 0 20.1t 131992 21840 R 99.9 1.1 0:01.16 reactor_0 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 221953 2 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 221953 2 busy 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221953 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221953 -w 256 00:35:13.120 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # 
top_reactor=' 221961 root 20 0 20.1t 131992 21840 R 99.9 1.1 0:00.33 reactor_2' 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221961 root 20 0 20.1t 131992 21840 R 99.9 1.1 0:00.33 reactor_2 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:13.378 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:35:13.636 [2024-07-15 14:27:59.556882] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:35:13.636 [2024-07-15 14:27:59.557330] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 221953 2 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221953 2 idle 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221953 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:35:13.636 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221953 -w 256 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221961 root 20 0 20.1t 132092 21840 S 0.0 1.1 0:00.65 reactor_2' 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221961 root 20 0 20.1t 132092 21840 S 0.0 1.1 0:00.65 reactor_2 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:13.910 
14:27:59 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:13.910 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:35:14.168 [2024-07-15 14:27:59.961058] interrupt_tgt.c: 99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:35:14.168 [2024-07-15 14:27:59.961837] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:35:14.168 [2024-07-15 14:27:59.962002] interrupt_tgt.c: 36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 221953 0 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 221953 0 idle 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=221953 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 221953 -w 256 00:35:14.168 14:27:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 221953 root 20 0 20.1t 132092 21840 S 0.0 1.1 0:02.04 reactor_0' 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 221953 root 20 0 20.1t 132092 21840 S 0.0 1.1 0:02.04 reactor_0 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:35:14.168 14:28:00 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 221953 00:35:14.168 14:28:00 reactor_set_interrupt -- 
common/autotest_common.sh@948 -- # '[' -z 221953 ']' 00:35:14.168 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@952 -- # kill -0 221953 00:35:14.168 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@953 -- # uname 00:35:14.168 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:14.168 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 221953 00:35:14.426 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:14.426 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:14.426 killing process with pid 221953 00:35:14.426 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@966 -- # echo 'killing process with pid 221953' 00:35:14.426 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@967 -- # kill 221953 00:35:14.426 14:28:00 reactor_set_interrupt -- common/autotest_common.sh@972 -- # wait 221953 00:35:15.805 14:28:01 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:35:15.805 14:28:01 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:35:15.805 00:35:15.805 real 0m12.488s 00:35:15.805 user 0m13.350s 00:35:15.805 sys 0m1.633s 00:35:15.805 14:28:01 reactor_set_interrupt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:15.805 14:28:01 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:15.805 ************************************ 00:35:15.805 END TEST reactor_set_interrupt 00:35:15.805 ************************************ 00:35:15.805 14:28:01 -- common/autotest_common.sh@1142 -- # return 0 00:35:15.805 14:28:01 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:35:15.805 14:28:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:15.805 14:28:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:15.805 14:28:01 -- common/autotest_common.sh@10 -- # set +x 00:35:15.806 ************************************ 00:35:15.806 START TEST reap_unregistered_poller 00:35:15.806 ************************************ 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:35:15.806 * Looking for test storage... 00:35:15.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.806 14:28:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:35:15.806 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:35:15.806 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.806 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.806 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
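reap_unregistered_poller.sh opens with the bootstrap idiom shared by these interrupt tests: interrupt_common.sh derives the test directory from the calling script's path, walks two levels up to the repository root, and sources the shared autotest helpers, which is what the dirname/readlink calls above are doing. Roughly:

    # paraphrased; the helper actually resolves the calling script's own path
    testdir=$(readlink -f "$(dirname "$0")")    # /home/vagrant/spdk_repo/spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")     # /home/vagrant/spdk_repo/spdk
    source "$rootdir/test/common/autotest_common.sh"
    rpc_py=$rootdir/scripts/rpc.py              # plus the reactor masks and RPC socket path

The build_config.sh dump that follows is autotest_common.sh exporting the CONFIG_* flags the SPDK tree was built with.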
00:35:15.806 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:35:15.806 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 
00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=n 00:35:15.806 14:28:01 
reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:35:15.806 14:28:01 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:35:15.806 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:35:15.806 14:28:01 
reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:35:15.806 14:28:01 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:35:15.806 #define SPDK_CONFIG_H 00:35:15.806 #define SPDK_CONFIG_APPS 1 00:35:15.807 #define SPDK_CONFIG_ARCH native 00:35:15.807 #define SPDK_CONFIG_ASAN 1 00:35:15.807 #undef SPDK_CONFIG_AVAHI 00:35:15.807 #undef SPDK_CONFIG_CET 00:35:15.807 #define SPDK_CONFIG_COVERAGE 1 00:35:15.807 #define SPDK_CONFIG_CROSS_PREFIX 00:35:15.807 #undef SPDK_CONFIG_CRYPTO 00:35:15.807 #undef SPDK_CONFIG_CRYPTO_MLX5 00:35:15.807 #undef SPDK_CONFIG_CUSTOMOCF 00:35:15.807 #undef SPDK_CONFIG_DAOS 00:35:15.807 #define SPDK_CONFIG_DAOS_DIR 00:35:15.807 #define SPDK_CONFIG_DEBUG 1 00:35:15.807 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:35:15.807 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:35:15.807 #define SPDK_CONFIG_DPDK_INC_DIR 00:35:15.807 #define SPDK_CONFIG_DPDK_LIB_DIR 00:35:15.807 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:35:15.807 #undef SPDK_CONFIG_DPDK_UADK 00:35:15.807 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:35:15.807 #define SPDK_CONFIG_EXAMPLES 1 00:35:15.807 #undef SPDK_CONFIG_FC 00:35:15.807 #define SPDK_CONFIG_FC_PATH 00:35:15.807 #define SPDK_CONFIG_FIO_PLUGIN 1 00:35:15.807 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:35:15.807 #undef SPDK_CONFIG_FUSE 00:35:15.807 #undef SPDK_CONFIG_FUZZER 00:35:15.807 #define SPDK_CONFIG_FUZZER_LIB 00:35:15.807 #undef SPDK_CONFIG_GOLANG 00:35:15.807 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:35:15.807 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:35:15.807 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:35:15.807 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:35:15.807 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:35:15.807 #undef SPDK_CONFIG_HAVE_LIBBSD 00:35:15.807 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:35:15.807 #define SPDK_CONFIG_IDXD 1 00:35:15.807 #undef SPDK_CONFIG_IDXD_KERNEL 00:35:15.807 #undef SPDK_CONFIG_IPSEC_MB 00:35:15.807 #define SPDK_CONFIG_IPSEC_MB_DIR 00:35:15.807 #define SPDK_CONFIG_ISAL 1 00:35:15.807 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:35:15.807 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:35:15.807 #define SPDK_CONFIG_LIBDIR 00:35:15.807 #undef SPDK_CONFIG_LTO 00:35:15.807 #define SPDK_CONFIG_MAX_LCORES 128 00:35:15.807 #define SPDK_CONFIG_NVME_CUSE 1 00:35:15.807 #undef SPDK_CONFIG_OCF 00:35:15.807 #define SPDK_CONFIG_OCF_PATH 00:35:15.807 #define SPDK_CONFIG_OPENSSL_PATH 00:35:15.807 #undef SPDK_CONFIG_PGO_CAPTURE 00:35:15.807 #define SPDK_CONFIG_PGO_DIR 00:35:15.807 #undef SPDK_CONFIG_PGO_USE 00:35:15.807 #define SPDK_CONFIG_PREFIX /usr/local 00:35:15.807 #undef SPDK_CONFIG_RAID5F 00:35:15.807 #undef SPDK_CONFIG_RBD 00:35:15.807 #define SPDK_CONFIG_RDMA 1 00:35:15.807 #define 
SPDK_CONFIG_RDMA_PROV verbs 00:35:15.807 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:35:15.807 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:35:15.807 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:35:15.807 #undef SPDK_CONFIG_SHARED 00:35:15.807 #undef SPDK_CONFIG_SMA 00:35:15.807 #define SPDK_CONFIG_TESTS 1 00:35:15.807 #undef SPDK_CONFIG_TSAN 00:35:15.807 #undef SPDK_CONFIG_UBLK 00:35:15.807 #undef SPDK_CONFIG_UBSAN 00:35:15.807 #define SPDK_CONFIG_UNIT_TESTS 1 00:35:15.807 #undef SPDK_CONFIG_URING 00:35:15.807 #define SPDK_CONFIG_URING_PATH 00:35:15.807 #undef SPDK_CONFIG_URING_ZNS 00:35:15.807 #undef SPDK_CONFIG_USDT 00:35:15.807 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:35:15.807 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:35:15.807 #undef SPDK_CONFIG_VFIO_USER 00:35:15.807 #define SPDK_CONFIG_VFIO_USER_DIR 00:35:15.807 #define SPDK_CONFIG_VHOST 1 00:35:15.807 #define SPDK_CONFIG_VIRTIO 1 00:35:15.807 #undef SPDK_CONFIG_VTUNE 00:35:15.807 #define SPDK_CONFIG_VTUNE_DIR 00:35:15.807 #define SPDK_CONFIG_WERROR 1 00:35:15.807 #define SPDK_CONFIG_WPDK_DIR 00:35:15.807 #undef SPDK_CONFIG_XNVME 00:35:15.807 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:35:15.807 14:28:01 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:15.807 14:28:01 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.807 14:28:01 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.807 14:28:01 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.807 14:28:01 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:15.807 14:28:01 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:15.807 14:28:01 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:15.807 14:28:01 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:35:15.807 14:28:01 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:15.807 
14:28:01 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:35:15.807 14:28:01 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@70 -- # : 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 1 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- 
common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0 00:35:15.807 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 1 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0 00:35:15.808 
14:28:01 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : true 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 1 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@154 -- # : 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 1 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@167 -- # : 
00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@169 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@189 -- # 
PYTHONDONTWRITEBYTECODE=1 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@200 -- # cat 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export QEMU_BIN= 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@253 -- # QEMU_BIN= 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@254 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:35:15.808 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:15.809 14:28:01 reap_unregistered_poller -- 
common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@263 -- # export valgrind= 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@263 -- # valgrind= 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@269 -- # uname -s 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKE=make 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@299 -- # TEST_MODE= 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@318 -- # [[ -z 222136 ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@318 -- # kill -0 222136 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@331 -- # local mount target_dir 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.deSmtc 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.deSmtc/tests/interrupt /tmp/spdk.deSmtc 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@358 -- # 
requested_size=2214592512 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@327 -- # df -T 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=6265700352 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=6270410752 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=2487136256 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=2508165120 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=21028864 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=xfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=12907352064 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=20303577088 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=7396225024 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=xfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=896184320 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1042161664 00:35:15.809 14:28:01 reap_unregistered_poller -- 
common/autotest_common.sh@363 -- # uses["$mount"]=145977344 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda1 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=97312768 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=7294976 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=1254076416 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=1254080512 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/rocky9-vg-autotest_2/rocky9-libvirt/output 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # avails["$mount"]=94544736256 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@363 -- # uses["$mount"]=5158043648 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:35:15.809 * Looking for test storage... 
00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@368 -- # local target_space new_size 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@372 -- # mount=/ 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@374 -- # target_space=12907352064 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ xfs == tmpfs ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ xfs == ramfs ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@381 -- # new_size=9610817536 00:35:15.809 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@389 -- # return 0 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # set -o errtrace 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # true 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@1689 -- # xtrace_fd 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=222179 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:15.810 14:28:01 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 222179 /var/tmp/spdk.sock 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@829 -- # '[' -z 222179 ']' 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
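What follows next in the trace is the body of the test: it backs a small AIO bdev with a zero-filled scratch file and then inspects the app thread's pollers over /var/tmp/spdk.sock, the socket the just-started interrupt_tgt listens on (the -r argument above, and also the DEFAULT_RPC_ADDR exported earlier, so plain rpc.py calls reach it). Condensed into the plain commands seen in the trace; a sketch, not the test script itself:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
# 10 MB scratch file in 2048-byte blocks, then register it as bdev AIO0.
dd if=/dev/zero of="$aiofile" bs=2048 count=5000
"$rpc" bdev_aio_create "$aiofile" AIO0 2048
"$rpc" bdev_wait_for_examine
# Dump the app thread's pollers; the test expects no active pollers and only
# rpc_subsystem_poll_servers among the timed pollers, both before and after the AIO setup.
"$rpc" thread_get_pollers | jq -r '.threads[0].active_pollers[].name'
"$rpc" thread_get_pollers | jq -r '.threads[0].timed_pollers[].name'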
00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:15.810 14:28:01 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:35:15.810 [2024-07-15 14:28:01.795059] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:15.810 [2024-07-15 14:28:01.795454] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222179 ] 00:35:16.069 [2024-07-15 14:28:01.967128] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:16.328 [2024-07-15 14:28:02.172420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:16.328 [2024-07-15 14:28:02.172478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:16.328 [2024-07-15 14:28:02.172480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.587 [2024-07-15 14:28:02.450761] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:16.845 14:28:02 reap_unregistered_poller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:16.845 14:28:02 reap_unregistered_poller -- common/autotest_common.sh@862 -- # return 0 00:35:16.845 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:35:16.845 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:35:16.845 14:28:02 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.845 14:28:02 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:35:16.845 14:28:02 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.845 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:35:16.845 "name": "app_thread", 00:35:16.845 "id": 1, 00:35:16.845 "active_pollers": [], 00:35:16.845 "timed_pollers": [ 00:35:16.845 { 00:35:16.845 "name": "rpc_subsystem_poll_servers", 00:35:16.845 "id": 1, 00:35:16.845 "state": "waiting", 00:35:16.845 "run_count": 0, 00:35:16.845 "busy_count": 0, 00:35:16.845 "period_ticks": 8800000 00:35:16.845 } 00:35:16.845 ], 00:35:16.845 "paused_pollers": [] 00:35:16.845 }' 00:35:16.845 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:35:16.845 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:35:16.845 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:35:16.845 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:35:17.104 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:35:17.104 14:28:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:35:17.104 14:28:02 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:35:17.104 14:28:02 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:17.104 14:28:02 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 
count=5000 00:35:17.104 5000+0 records in 00:35:17.104 5000+0 records out 00:35:17.104 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0216869 s, 472 MB/s 00:35:17.104 14:28:02 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:35:17.363 AIO0 00:35:17.363 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:35:17.622 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:35:17.622 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:35:17.622 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:35:17.622 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:17.622 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:35:17.622 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:17.622 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:35:17.622 "name": "app_thread", 00:35:17.622 "id": 1, 00:35:17.622 "active_pollers": [], 00:35:17.622 "timed_pollers": [ 00:35:17.622 { 00:35:17.622 "name": "rpc_subsystem_poll_servers", 00:35:17.622 "id": 1, 00:35:17.622 "state": "waiting", 00:35:17.622 "run_count": 0, 00:35:17.622 "busy_count": 0, 00:35:17.622 "period_ticks": 8800000 00:35:17.622 } 00:35:17.622 ], 00:35:17.622 "paused_pollers": [] 00:35:17.622 }' 00:35:17.622 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:35:17.881 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:35:17.881 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:35:17.881 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:35:17.881 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:35:17.881 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:35:17.881 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:35:17.881 14:28:03 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 222179 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@948 -- # '[' -z 222179 ']' 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@952 -- # kill -0 222179 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@953 -- # uname 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 222179 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:17.881 killing process with pid 222179 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:17.881 14:28:03 
reap_unregistered_poller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 222179' 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@967 -- # kill 222179 00:35:17.881 14:28:03 reap_unregistered_poller -- common/autotest_common.sh@972 -- # wait 222179 00:35:19.264 14:28:04 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:35:19.264 14:28:04 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:35:19.264 00:35:19.264 real 0m3.405s 00:35:19.264 user 0m2.813s 00:35:19.264 sys 0m0.528s 00:35:19.264 14:28:04 reap_unregistered_poller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:19.264 14:28:04 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:35:19.264 ************************************ 00:35:19.264 END TEST reap_unregistered_poller 00:35:19.264 ************************************ 00:35:19.264 14:28:04 -- common/autotest_common.sh@1142 -- # return 0 00:35:19.264 14:28:04 -- spdk/autotest.sh@198 -- # uname -s 00:35:19.264 14:28:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:35:19.264 14:28:04 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:35:19.264 14:28:04 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:35:19.264 14:28:04 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:35:19.264 14:28:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:19.264 14:28:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.264 14:28:04 -- common/autotest_common.sh@10 -- # set +x 00:35:19.264 ************************************ 00:35:19.264 START TEST spdk_dd 00:35:19.264 ************************************ 00:35:19.264 14:28:04 spdk_dd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:35:19.264 * Looking for test storage... 
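The check performed by reap_unregistered_poller.sh above is a thread_get_pollers RPC whose JSON reply is filtered with jq for the active and timed poller names; the same query can be issued by hand with the calls already shown in the trace:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# pollers still registered on the app thread (empty / rpc_subsystem_poll_servers above)
$rpc_py thread_get_pollers | jq -r '.threads[0].active_pollers[].name'
$rpc_py thread_get_pollers | jq -r '.threads[0].timed_pollers[].name'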
00:35:19.264 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:35:19.264 14:28:05 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:19.264 14:28:05 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.264 14:28:05 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.264 14:28:05 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.264 14:28:05 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.264 14:28:05 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.264 14:28:05 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.264 14:28:05 spdk_dd -- paths/export.sh@5 -- # export PATH 00:35:19.264 14:28:05 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.264 14:28:05 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:19.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:35:19.264 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:19.524 14:28:05 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:35:19.524 14:28:05 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@230 -- # local class 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@232 -- # local progif 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@233 -- # printf %02x 1 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@233 -- # class=01 00:35:19.524 14:28:05 spdk_dd -- 
scripts/common.sh@234 -- # printf %02x 8 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@15 -- # local i 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@24 -- # return 0 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:35:19.524 14:28:05 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:35:19.524 14:28:05 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@139 -- # local lib so 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 
-- # [[ libcrypto.so.3 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.524 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libz.so.1 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libgcrypt.so.20 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@143 -- # [[ libgpg-error.so.0 == liburing.so.* ]] 00:35:19.525 14:28:05 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:35:19.525 14:28:05 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:35:19.525 14:28:05 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:35:19.525 14:28:05 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:19.525 14:28:05 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.525 14:28:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:35:19.525 ************************************ 00:35:19.525 START TEST spdk_dd_basic_rw 00:35:19.525 ************************************ 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:35:19.525 * Looking for test storage... 
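The nvme_in_userspace probe traced above finds controllers by PCI class code: class 01, subclass 08, prog-if 02 (NVMe). Joined into a single pipeline, the four traced commands amount to the following; the per-BDF driver and binding checks from scripts/common.sh are omitted:

# print the BDF of every PCI device with class code 0108, prog-if 02
lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'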
00:35:19.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 
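basic_rw.sh reaches the controller through spdk_dd's --json config rather than a kernel block device: the method_bdev_nvme_attach_controller_0 array declared above is rendered by gen_conf into the JSON visible further down in this log. Written out by hand it looks like the sketch below; the /tmp/nvme0.json path is illustrative, the test actually feeds the config over /dev/fd/62:

cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF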
00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:35:19.525 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:35:19.786 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command 
& Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile 
Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 108 Data Units Written: 7 Host Read Commands: 2348 Host Write Commands: 112 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:35:19.786 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with 
SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write 
Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 108 Data Units Written: 7 Host Read Commands: 2348 Host Write Commands: 112 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 
Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:19.787 ************************************ 00:35:19.787 START TEST dd_bs_lt_native_bs 00:35:19.787 ************************************ 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1123 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
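get_native_nvme_bs above pulls the native block size out of the spdk_nvme_identify dump by first matching the current LBA format (#04 here) and then that format's data size (4096). Stripped of the bash regex machinery, the same probe can be done with the identify tool plus sed; the sed expressions are a sketch, not the dd/common.sh code:

id_out=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0')
# current LBA format index, e.g. 04
lbaf=$(sed -n 's/.*Current LBA Format: *LBA Format #\([0-9]\+\).*/\1/p' <<< "$id_out")
# data size of that format, e.g. 4096
sed -n "s/.*LBA Format #${lbaf}: *Data Size: *\([0-9]\+\).*/\1/p" <<< "$id_out"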
00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:35:19.787 14:28:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:35:20.052 { 00:35:20.052 "subsystems": [ 00:35:20.052 { 00:35:20.052 "subsystem": "bdev", 00:35:20.052 "config": [ 00:35:20.052 { 00:35:20.052 "params": { 00:35:20.052 "trtype": "pcie", 00:35:20.052 "traddr": "0000:00:10.0", 00:35:20.052 "name": "Nvme0" 00:35:20.052 }, 00:35:20.052 "method": "bdev_nvme_attach_controller" 00:35:20.052 }, 00:35:20.052 { 00:35:20.052 "method": "bdev_wait_for_examine" 00:35:20.052 } 00:35:20.052 ] 00:35:20.052 } 00:35:20.052 ] 00:35:20.052 } 00:35:20.052 [2024-07-15 14:28:05.798034] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:20.052 [2024-07-15 14:28:05.798285] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222462 ] 00:35:20.052 [2024-07-15 14:28:05.962149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:20.316 [2024-07-15 14:28:06.163998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:20.574 [2024-07-15 14:28:06.521602] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:35:20.574 [2024-07-15 14:28:06.521877] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:35:21.513 [2024-07-15 14:28:07.195195] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:21.771 00:35:21.771 real 0m1.830s 00:35:21.771 user 0m1.525s 00:35:21.771 sys 0m0.252s 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:21.771 ************************************ 00:35:21.771 END TEST dd_bs_lt_native_bs 00:35:21.771 ************************************ 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:21.771 ************************************ 00:35:21.771 START TEST dd_rw 00:35:21.771 ************************************ 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1123 -- # basic_rw 4096 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << 
bs))) 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:21.771 14:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:22.338 14:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:35:22.338 14:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:22.338 14:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:22.338 14:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:22.338 [2024-07-15 14:28:08.299593] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:22.338 [2024-07-15 14:28:08.300245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222507 ] 00:35:22.338 { 00:35:22.338 "subsystems": [ 00:35:22.338 { 00:35:22.338 "subsystem": "bdev", 00:35:22.338 "config": [ 00:35:22.338 { 00:35:22.338 "params": { 00:35:22.338 "trtype": "pcie", 00:35:22.338 "traddr": "0000:00:10.0", 00:35:22.338 "name": "Nvme0" 00:35:22.338 }, 00:35:22.338 "method": "bdev_nvme_attach_controller" 00:35:22.338 }, 00:35:22.338 { 00:35:22.338 "method": "bdev_wait_for_examine" 00:35:22.338 } 00:35:22.338 ] 00:35:22.338 } 00:35:22.338 ] 00:35:22.338 } 00:35:22.596 [2024-07-15 14:28:08.462236] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:22.856 [2024-07-15 14:28:08.658846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:24.052  Copying: 60/60 [kB] (average 29 MBps) 00:35:24.052 00:35:24.052 14:28:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:35:24.052 14:28:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:24.052 14:28:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:24.052 14:28:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:24.052 [2024-07-15 14:28:09.993108] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:24.052 [2024-07-15 14:28:09.993381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222538 ] 00:35:24.052 { 00:35:24.052 "subsystems": [ 00:35:24.052 { 00:35:24.053 "subsystem": "bdev", 00:35:24.053 "config": [ 00:35:24.053 { 00:35:24.053 "params": { 00:35:24.053 "trtype": "pcie", 00:35:24.053 "traddr": "0000:00:10.0", 00:35:24.053 "name": "Nvme0" 00:35:24.053 }, 00:35:24.053 "method": "bdev_nvme_attach_controller" 00:35:24.053 }, 00:35:24.053 { 00:35:24.053 "method": "bdev_wait_for_examine" 00:35:24.053 } 00:35:24.053 ] 00:35:24.053 } 00:35:24.053 ] 00:35:24.053 } 00:35:24.312 [2024-07-15 14:28:10.159428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.570 [2024-07-15 14:28:10.355635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.767  Copying: 60/60 [kB] (average 29 MBps) 00:35:25.767 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:25.767 14:28:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:26.026 [2024-07-15 14:28:11.799622] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:26.026 [2024-07-15 14:28:11.800056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222570 ] 00:35:26.026 { 00:35:26.026 "subsystems": [ 00:35:26.026 { 00:35:26.026 "subsystem": "bdev", 00:35:26.026 "config": [ 00:35:26.026 { 00:35:26.026 "params": { 00:35:26.026 "trtype": "pcie", 00:35:26.026 "traddr": "0000:00:10.0", 00:35:26.026 "name": "Nvme0" 00:35:26.026 }, 00:35:26.026 "method": "bdev_nvme_attach_controller" 00:35:26.026 }, 00:35:26.026 { 00:35:26.026 "method": "bdev_wait_for_examine" 00:35:26.026 } 00:35:26.026 ] 00:35:26.026 } 00:35:26.026 ] 00:35:26.026 } 00:35:26.026 [2024-07-15 14:28:11.962434] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:26.284 [2024-07-15 14:28:12.154412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.476  Copying: 1024/1024 [kB] (average 500 MBps) 00:35:27.476 00:35:27.476 14:28:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:27.476 14:28:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:35:27.476 14:28:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:35:27.476 14:28:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:35:27.476 14:28:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:35:27.476 14:28:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:27.476 14:28:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:28.411 14:28:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:35:28.411 14:28:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:28.411 14:28:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:28.411 14:28:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:28.411 [2024-07-15 14:28:14.154574] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
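Each dd_rw pass traced here is a write / read-back / verify / wipe cycle against the Nvme0n1 bdev. For the bs=4096, qd=1 pass, and using the illustrative /tmp/nvme0.json config from the earlier sketch in place of the test's /dev/fd/62, the cycle is roughly:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
# dd.dump0 holds 15 * 4096 = 61440 bytes of generated data (gen_bytes in the trace)
"$SPDK_DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/nvme0.json
"$SPDK_DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs=4096 --qd=1 --count=15 --json /tmp/nvme0.json
diff -q "$D/dd.dump0" "$D/dd.dump1"            # round trip must be byte-identical
# clear_nvme: overwrite the first 1 MiB so the next pass starts from zeroes
"$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json /tmp/nvme0.json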
00:35:28.411 [2024-07-15 14:28:14.154936] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222598 ] 00:35:28.411 { 00:35:28.411 "subsystems": [ 00:35:28.411 { 00:35:28.411 "subsystem": "bdev", 00:35:28.411 "config": [ 00:35:28.411 { 00:35:28.411 "params": { 00:35:28.411 "trtype": "pcie", 00:35:28.411 "traddr": "0000:00:10.0", 00:35:28.411 "name": "Nvme0" 00:35:28.411 }, 00:35:28.411 "method": "bdev_nvme_attach_controller" 00:35:28.411 }, 00:35:28.411 { 00:35:28.411 "method": "bdev_wait_for_examine" 00:35:28.411 } 00:35:28.411 ] 00:35:28.411 } 00:35:28.411 ] 00:35:28.411 } 00:35:28.411 [2024-07-15 14:28:14.309954] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.669 [2024-07-15 14:28:14.514170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:30.335  Copying: 60/60 [kB] (average 58 MBps) 00:35:30.335 00:35:30.335 14:28:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:35:30.335 14:28:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:30.335 14:28:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:30.335 14:28:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:30.335 [2024-07-15 14:28:15.970481] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:30.335 [2024-07-15 14:28:15.970908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222629 ] 00:35:30.335 { 00:35:30.335 "subsystems": [ 00:35:30.335 { 00:35:30.335 "subsystem": "bdev", 00:35:30.335 "config": [ 00:35:30.335 { 00:35:30.335 "params": { 00:35:30.335 "trtype": "pcie", 00:35:30.335 "traddr": "0000:00:10.0", 00:35:30.335 "name": "Nvme0" 00:35:30.335 }, 00:35:30.335 "method": "bdev_nvme_attach_controller" 00:35:30.335 }, 00:35:30.335 { 00:35:30.335 "method": "bdev_wait_for_examine" 00:35:30.335 } 00:35:30.335 ] 00:35:30.335 } 00:35:30.335 ] 00:35:30.335 } 00:35:30.335 [2024-07-15 14:28:16.141100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.335 [2024-07-15 14:28:16.337679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.835  Copying: 60/60 [kB] (average 58 MBps) 00:35:31.835 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:31.835 14:28:17 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:31.835 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:31.836 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:31.836 14:28:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:31.836 [2024-07-15 14:28:17.723391] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:31.836 [2024-07-15 14:28:17.723841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222658 ] 00:35:31.836 { 00:35:31.836 "subsystems": [ 00:35:31.836 { 00:35:31.836 "subsystem": "bdev", 00:35:31.836 "config": [ 00:35:31.836 { 00:35:31.836 "params": { 00:35:31.836 "trtype": "pcie", 00:35:31.836 "traddr": "0000:00:10.0", 00:35:31.836 "name": "Nvme0" 00:35:31.836 }, 00:35:31.836 "method": "bdev_nvme_attach_controller" 00:35:31.836 }, 00:35:31.836 { 00:35:31.836 "method": "bdev_wait_for_examine" 00:35:31.836 } 00:35:31.836 ] 00:35:31.836 } 00:35:31.836 ] 00:35:31.836 } 00:35:32.094 [2024-07-15 14:28:17.886513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.094 [2024-07-15 14:28:18.087736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.593  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:33.593 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:33.593 14:28:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:34.158 14:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:35:34.158 14:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:34.158 14:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:34.158 14:28:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:34.158 [2024-07-15 14:28:20.053283] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:34.158 [2024-07-15 14:28:20.053590] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222689 ] 00:35:34.158 { 00:35:34.158 "subsystems": [ 00:35:34.158 { 00:35:34.158 "subsystem": "bdev", 00:35:34.158 "config": [ 00:35:34.158 { 00:35:34.158 "params": { 00:35:34.158 "trtype": "pcie", 00:35:34.158 "traddr": "0000:00:10.0", 00:35:34.158 "name": "Nvme0" 00:35:34.158 }, 00:35:34.158 "method": "bdev_nvme_attach_controller" 00:35:34.158 }, 00:35:34.158 { 00:35:34.158 "method": "bdev_wait_for_examine" 00:35:34.158 } 00:35:34.158 ] 00:35:34.158 } 00:35:34.158 ] 00:35:34.158 } 00:35:34.416 [2024-07-15 14:28:20.207388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:34.416 [2024-07-15 14:28:20.403754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.914  Copying: 56/56 [kB] (average 54 MBps) 00:35:35.914 00:35:35.914 14:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:35:35.914 14:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:35.914 14:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:35.914 14:28:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:35.914 [2024-07-15 14:28:21.767899] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:35.914 [2024-07-15 14:28:21.768657] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222716 ] 00:35:35.914 { 00:35:35.914 "subsystems": [ 00:35:35.914 { 00:35:35.914 "subsystem": "bdev", 00:35:35.914 "config": [ 00:35:35.914 { 00:35:35.914 "params": { 00:35:35.914 "trtype": "pcie", 00:35:35.914 "traddr": "0000:00:10.0", 00:35:35.914 "name": "Nvme0" 00:35:35.914 }, 00:35:35.914 "method": "bdev_nvme_attach_controller" 00:35:35.914 }, 00:35:35.914 { 00:35:35.914 "method": "bdev_wait_for_examine" 00:35:35.914 } 00:35:35.914 ] 00:35:35.914 } 00:35:35.914 ] 00:35:35.914 } 00:35:36.172 [2024-07-15 14:28:21.932600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:36.172 [2024-07-15 14:28:22.119645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.670  Copying: 56/56 [kB] (average 54 MBps) 00:35:37.670 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:37.670 14:28:23 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:37.670 14:28:23 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:37.670 [2024-07-15 14:28:23.563349] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:37.670 [2024-07-15 14:28:23.563741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222738 ] 00:35:37.670 { 00:35:37.670 "subsystems": [ 00:35:37.670 { 00:35:37.670 "subsystem": "bdev", 00:35:37.670 "config": [ 00:35:37.670 { 00:35:37.670 "params": { 00:35:37.670 "trtype": "pcie", 00:35:37.670 "traddr": "0000:00:10.0", 00:35:37.670 "name": "Nvme0" 00:35:37.670 }, 00:35:37.670 "method": "bdev_nvme_attach_controller" 00:35:37.670 }, 00:35:37.670 { 00:35:37.670 "method": "bdev_wait_for_examine" 00:35:37.670 } 00:35:37.670 ] 00:35:37.670 } 00:35:37.670 ] 00:35:37.670 } 00:35:37.928 [2024-07-15 14:28:23.726307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.928 [2024-07-15 14:28:23.912813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.429  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:39.429 00:35:39.429 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:39.429 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:35:39.429 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:35:39.429 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:35:39.429 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:35:39.429 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:39.429 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:40.050 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:35:40.050 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:40.050 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:40.050 14:28:25 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:40.050 [2024-07-15 14:28:25.824127] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:40.050 [2024-07-15 14:28:25.824863] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222776 ] 00:35:40.050 { 00:35:40.050 "subsystems": [ 00:35:40.050 { 00:35:40.050 "subsystem": "bdev", 00:35:40.050 "config": [ 00:35:40.050 { 00:35:40.050 "params": { 00:35:40.050 "trtype": "pcie", 00:35:40.050 "traddr": "0000:00:10.0", 00:35:40.050 "name": "Nvme0" 00:35:40.050 }, 00:35:40.050 "method": "bdev_nvme_attach_controller" 00:35:40.050 }, 00:35:40.050 { 00:35:40.050 "method": "bdev_wait_for_examine" 00:35:40.050 } 00:35:40.050 ] 00:35:40.050 } 00:35:40.050 ] 00:35:40.050 } 00:35:40.050 [2024-07-15 14:28:25.977229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:40.308 [2024-07-15 14:28:26.165032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.977  Copying: 56/56 [kB] (average 54 MBps) 00:35:41.977 00:35:41.977 14:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:35:41.977 14:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:41.977 14:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:41.977 14:28:27 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:41.977 [2024-07-15 14:28:27.606838] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:41.977 [2024-07-15 14:28:27.607176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222797 ] 00:35:41.977 { 00:35:41.977 "subsystems": [ 00:35:41.977 { 00:35:41.977 "subsystem": "bdev", 00:35:41.977 "config": [ 00:35:41.977 { 00:35:41.977 "params": { 00:35:41.977 "trtype": "pcie", 00:35:41.977 "traddr": "0000:00:10.0", 00:35:41.977 "name": "Nvme0" 00:35:41.977 }, 00:35:41.977 "method": "bdev_nvme_attach_controller" 00:35:41.977 }, 00:35:41.977 { 00:35:41.977 "method": "bdev_wait_for_examine" 00:35:41.977 } 00:35:41.977 ] 00:35:41.977 } 00:35:41.977 ] 00:35:41.977 } 00:35:41.977 [2024-07-15 14:28:27.756082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.977 [2024-07-15 14:28:27.946338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:43.480  Copying: 56/56 [kB] (average 54 MBps) 00:35:43.480 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:43.480 14:28:29 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:43.480 14:28:29 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:43.480 [2024-07-15 14:28:29.308063] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:43.480 [2024-07-15 14:28:29.308457] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222825 ] 00:35:43.480 { 00:35:43.480 "subsystems": [ 00:35:43.480 { 00:35:43.480 "subsystem": "bdev", 00:35:43.480 "config": [ 00:35:43.480 { 00:35:43.480 "params": { 00:35:43.480 "trtype": "pcie", 00:35:43.480 "traddr": "0000:00:10.0", 00:35:43.480 "name": "Nvme0" 00:35:43.480 }, 00:35:43.480 "method": "bdev_nvme_attach_controller" 00:35:43.480 }, 00:35:43.480 { 00:35:43.480 "method": "bdev_wait_for_examine" 00:35:43.480 } 00:35:43.480 ] 00:35:43.480 } 00:35:43.480 ] 00:35:43.480 } 00:35:43.480 [2024-07-15 14:28:29.471618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.738 [2024-07-15 14:28:29.662995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.259  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:45.259 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:45.259 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:45.826 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:35:45.826 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:45.826 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:45.826 14:28:31 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:45.826 [2024-07-15 14:28:31.592554] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:45.826 [2024-07-15 14:28:31.592942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222856 ] 00:35:45.826 { 00:35:45.826 "subsystems": [ 00:35:45.826 { 00:35:45.826 "subsystem": "bdev", 00:35:45.826 "config": [ 00:35:45.826 { 00:35:45.826 "params": { 00:35:45.826 "trtype": "pcie", 00:35:45.826 "traddr": "0000:00:10.0", 00:35:45.826 "name": "Nvme0" 00:35:45.826 }, 00:35:45.826 "method": "bdev_nvme_attach_controller" 00:35:45.826 }, 00:35:45.826 { 00:35:45.826 "method": "bdev_wait_for_examine" 00:35:45.826 } 00:35:45.826 ] 00:35:45.826 } 00:35:45.826 ] 00:35:45.826 } 00:35:45.826 [2024-07-15 14:28:31.757690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.085 [2024-07-15 14:28:31.974494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:47.718  Copying: 48/48 [kB] (average 46 MBps) 00:35:47.718 00:35:47.718 14:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:35:47.718 14:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:47.718 14:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:47.718 14:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:47.718 [2024-07-15 14:28:33.418548] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:47.718 [2024-07-15 14:28:33.418957] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222884 ] 00:35:47.718 { 00:35:47.718 "subsystems": [ 00:35:47.718 { 00:35:47.718 "subsystem": "bdev", 00:35:47.718 "config": [ 00:35:47.718 { 00:35:47.718 "params": { 00:35:47.718 "trtype": "pcie", 00:35:47.718 "traddr": "0000:00:10.0", 00:35:47.718 "name": "Nvme0" 00:35:47.718 }, 00:35:47.718 "method": "bdev_nvme_attach_controller" 00:35:47.718 }, 00:35:47.718 { 00:35:47.718 "method": "bdev_wait_for_examine" 00:35:47.718 } 00:35:47.718 ] 00:35:47.718 } 00:35:47.718 ] 00:35:47.718 } 00:35:47.718 [2024-07-15 14:28:33.580334] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:47.976 [2024-07-15 14:28:33.761958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.169  Copying: 48/48 [kB] (average 46 MBps) 00:35:49.169 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:49.169 14:28:35 
spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:49.169 14:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:49.427 [2024-07-15 14:28:35.200582] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:49.427 [2024-07-15 14:28:35.201102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222912 ] 00:35:49.427 { 00:35:49.427 "subsystems": [ 00:35:49.427 { 00:35:49.427 "subsystem": "bdev", 00:35:49.427 "config": [ 00:35:49.427 { 00:35:49.427 "params": { 00:35:49.427 "trtype": "pcie", 00:35:49.427 "traddr": "0000:00:10.0", 00:35:49.427 "name": "Nvme0" 00:35:49.427 }, 00:35:49.427 "method": "bdev_nvme_attach_controller" 00:35:49.427 }, 00:35:49.427 { 00:35:49.427 "method": "bdev_wait_for_examine" 00:35:49.427 } 00:35:49.427 ] 00:35:49.427 } 00:35:49.427 ] 00:35:49.427 } 00:35:49.427 [2024-07-15 14:28:35.365254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.717 [2024-07-15 14:28:35.565616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:51.361  Copying: 1024/1024 [kB] (average 500 MBps) 00:35:51.361 00:35:51.361 14:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:35:51.361 14:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:35:51.361 14:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:35:51.361 14:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:35:51.361 14:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:35:51.361 14:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:35:51.361 14:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:51.619 14:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:35:51.619 14:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:35:51.619 14:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:51.619 14:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:51.619 [2024-07-15 14:28:37.521608] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:51.619 [2024-07-15 14:28:37.522467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222944 ] 00:35:51.619 { 00:35:51.619 "subsystems": [ 00:35:51.619 { 00:35:51.619 "subsystem": "bdev", 00:35:51.619 "config": [ 00:35:51.619 { 00:35:51.619 "params": { 00:35:51.619 "trtype": "pcie", 00:35:51.619 "traddr": "0000:00:10.0", 00:35:51.619 "name": "Nvme0" 00:35:51.619 }, 00:35:51.619 "method": "bdev_nvme_attach_controller" 00:35:51.619 }, 00:35:51.619 { 00:35:51.619 "method": "bdev_wait_for_examine" 00:35:51.619 } 00:35:51.619 ] 00:35:51.619 } 00:35:51.619 ] 00:35:51.619 } 00:35:51.877 [2024-07-15 14:28:37.686436] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:51.877 [2024-07-15 14:28:37.872571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:53.378  Copying: 48/48 [kB] (average 46 MBps) 00:35:53.378 00:35:53.378 14:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:35:53.378 14:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:35:53.378 14:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:53.378 14:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:53.378 { 00:35:53.378 "subsystems": [ 00:35:53.378 { 00:35:53.378 "subsystem": "bdev", 00:35:53.378 "config": [ 00:35:53.378 { 00:35:53.378 "params": { 00:35:53.378 "trtype": "pcie", 00:35:53.378 "traddr": "0000:00:10.0", 00:35:53.378 "name": "Nvme0" 00:35:53.378 }, 00:35:53.378 "method": "bdev_nvme_attach_controller" 00:35:53.378 }, 00:35:53.378 { 00:35:53.378 "method": "bdev_wait_for_examine" 00:35:53.378 } 00:35:53.378 ] 00:35:53.378 } 00:35:53.378 ] 00:35:53.378 } 00:35:53.378 [2024-07-15 14:28:39.370597] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:53.378 [2024-07-15 14:28:39.371696] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222971 ] 00:35:53.637 [2024-07-15 14:28:39.555940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:53.897 [2024-07-15 14:28:39.762218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:55.559  Copying: 48/48 [kB] (average 46 MBps) 00:35:55.559 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:35:55.559 14:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:55.559 [2024-07-15 14:28:41.238130] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:55.559 [2024-07-15 14:28:41.238927] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222999 ] 00:35:55.559 { 00:35:55.559 "subsystems": [ 00:35:55.559 { 00:35:55.559 "subsystem": "bdev", 00:35:55.559 "config": [ 00:35:55.559 { 00:35:55.559 "params": { 00:35:55.559 "trtype": "pcie", 00:35:55.559 "traddr": "0000:00:10.0", 00:35:55.559 "name": "Nvme0" 00:35:55.559 }, 00:35:55.559 "method": "bdev_nvme_attach_controller" 00:35:55.559 }, 00:35:55.559 { 00:35:55.559 "method": "bdev_wait_for_examine" 00:35:55.559 } 00:35:55.559 ] 00:35:55.559 } 00:35:55.559 ] 00:35:55.559 } 00:35:55.559 [2024-07-15 14:28:41.399423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.818 [2024-07-15 14:28:41.598191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.012  Copying: 1024/1024 [kB] (average 1000 MBps) 00:35:57.012 00:35:57.012 00:35:57.012 real 0m35.366s 00:35:57.012 user 0m29.729s 00:35:57.012 sys 0m4.438s 00:35:57.012 14:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:57.012 14:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:35:57.012 ************************************ 00:35:57.012 END TEST dd_rw 00:35:57.012 ************************************ 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:35:57.272 ************************************ 00:35:57.272 START TEST dd_rw_offset 00:35:57.272 ************************************ 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1123 -- # basic_offset 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=qbmclk4jkidiklega9z81ggqrwb355iosuxi3hizv8s35hk8cu3rf1sij7xivsuy4z3qy4xc5st8sixqgl2bp7hk5yekpyj9vbx8t3p6y1b1tla73tiuyi0rrwzli5blqj4k7k0zbqybjjpi9nqjqdrjpq69znarwat7gmrcevu1zelz8q0mzma8029th951hfvc6fo694k1fbgg100q77kajyouy6z3q0dhbgw0uwrirhqy1psrxuuywjoooomtn6s848wxyq6yqwcehta22ovxzo2lu1pc36sabu9si45gcf2z38bbqnshop10wlx2v96uc34ir468f1bk9yb5fnwwrdn2jfzkklnt08t69jbhwxr419i1ulfuuemzcnpco6yp75dv2zjdhkd17re1mvg4yctfy64r3rgifrwai8spjz9sw4f8etx3inpq0nctbzln0o031pf2xvjjch7mr0ohdh5dxbicx3ds8p23mh3cds6e8uv0cgepfh8pxea8rqm4b106ur2s7vi275x64386h4valcdcvcvjg7z4g4bdykuhwb9kdl4lddgqs6tqym24q4owez7so2zwbipm3xischhzc5tfccqyo2ldyb75ww1qarddmfgq8v4qwjkh8qzw6eoy1ighx5z7lje31edvncxid8vvhhgf8z0r5npmiaivevesx8s7mbbt596oyf7umickuxjczqr9r3m2rr06pqhuyqve4kriyle3wrg9n7103b6x99tcjn8167r7bs1f8zj56byti2xl0dr908ck8ivjusc5ld4tzq293tjg0dmwe8lsswbbnxut8cxcue9bzzoxvxw49w0phnic29zq3rtlqd9g4xuw58qs77brq1vjifj87lk5oo2k3mcojp2sshd0ogc4y2tpd93oph6epdt44d66bkkktgl0h4t6j9pij9m7xcpkayvdo27dmxx2x6nfr7v1ckqe000bmootdmmovnp4pgaylljj56kmymjr0jbun9lxf985vs5rlk5n087mkg9jksu284yz8on9caj6cnedb42q1v6jqqb932qveksplgiam1d3nx6hagcap4iemk1x5yccyzfqpeg61in2kgmypsfkemt776xn8hp7ng5z567sk720jijjhcxfcn995nqyg3owooq4yclohd5azx16ccyk7cf9628xgr1znch32jqjn7nhq5y0q77dt7eg2r94qbg7jca0dcf2zvhoqmt6rqvodcsvtynbjpffqfotpyn7t9tj7q5r0bbxjgk74l6lo1p8ox3wkm9jp6n470tyrw9yazmoftozl5q8ipfo5e9wg2wwtn6lbojhixgsiocacvkry6k9pgk77yvzw47vwcing0s2a6skndwuhcyit166v6unnocrkm0rp2h5xb5p2l2h8px76mxrq42ib0keh88afm2uscd1j6szdk5bn9fb846wg3b697sef8donld0o4bm3curxxz14cznwddwgr7v4r57e1tmch14di2v8bqz1zssqfjnqc69e0lwk8gk6bcbi1anj85yn7ff6f302iz33srlcro8zh9g436iw7rjxsw0h6928gmgx2cixnx77gbadwpj597sv9rgbhvg1fjihvkaxnlaiea85z02yzo04saoe4aqta2b9sfq4smhm66uqx1hr98jsi0xs8yhdzwk5thsagjebmnvfpsycazmdzxf1vuwap7k8zh0vb73ysdqqv5pikr3y9khyypbnok7p0ssaxgic7p403mf4wlqsc3rkr6kf98y660j9g0rcsljxzca2ld1bewd2qefn37l0awin5ybh05inf9k06xkj5evmq9ysu7krx5qm3a61s0kz3o74vz7j1peb2ub8i58f0ovehtf0tn3ds6qank3g9k1nb0o5uj9w7r45pcdoy8qi0ckv8rm1c1ouo7wp8epale2nyiqplwhfi6h7d6wliuf88p4vtv07xs2oc3c7f6lai70rsmbw3t1n0ih6n3356uws2kum5vbtwfr9c9v91yoyar1bvviejbbyzpb2wrendxlgkkz4o10p8r9ffuzadtwli1s3s0tzx86u9i3ygsf2yjrr7z84fg2gixleapkgi4o0jg2io5zmszxh68mrl0nmgw51wxq90eot6ft36pazck40qhtwdfgo6rakynlarmzf4uc62qex3tf197xxthoueyaxnanvoy9vgewg44v6mubfnenrf3jfm35n9bbdv7zx3950ft2unhenv2l34hkv0v9s5i5af2grp3st553j3w6izp7s0e7sh275hpbd8jczo519uu6v3guzo7ka9ubc04ytqq1q9y91oxjzm694c9dz78dkqndk4kdufavamxwplzdymfhicjou9r2wy9cmtq2qkb58exgbos1qrnr9wsvnactcmcazxj2il3y4wabgx541br6pih867ftp0mdo6beyvflkirwqpt9kfj87hopoxt7fq9w9nzkb8af46xdv8c65d4of2dx92v7ked6ifex39y5byr7rd7ihnaq5m3qbinv576h650p2a4xo73u2oeunssccu4rs50lqlaruuflffxwbhtskc2fzjsxrtue7iusyfd4f2fhpv9nrk36g8prqujaolae04h5tdhacy1f83gg38lrxhjdwtmz6ob3h81jdnzsgco50rcx4ieqktosj65mpdqse9k9igojwybzcx0wy7vlvv4ypba7ai4obufrz6ue4oktyl07ed5eizkjkeajlo2nhooxqf3l1h9ncpcrqp8wxt0uu0zmeeb8otpqniaa3mo6oykz4hr3xwzod05rc379cyhft3lrkuq6q7sz270sblxqktpxxdz10f4t56jluf7hq4mhruq2f0helvm1pk10ilh9r065idl9ywfe1ehdevf0794gvank8z6t4upkmjybu66i7ihtakgm5qbirub32q0z8gf83om4gpgijy91hxnwcpk66zb01s7eeggkr4hjaup4san4cjzu6qbacn39jj8d7b31h70iob4xwb2tpi93kd47udxidhgdcu3nkglx2p5p4mz5po7mnhxvu4terdxfp7geofy26szytwol25lws3v0qnzcrtguhl9rq1p8mu2l7qmzl54059i8gm2o1edfd5yzvw7vpjv04ce0m1svpxjhqin92kz1a4mpfvehsyyxq8gqmgjv28z5ie6y7brput6e6nx3flv7gs66ukoy38o62a5bo6pc2i6soogyg35kl123vgg2u85ernvv7wzihue3saeia15gx1mr2txz7eotwqfp44ri4bouxiqyof2k4lezlul5eaggqogia972vf6fop7g1zyseu11a8qb8ga9je024pesqbqf5o8si2uedfcn4jdxp86qp5d1zh1cwqfne0dwgggyg2zucc7gip84hun7dykvjodvkszb2fqcbnrfpu7duatu140lcbvnavs35awq5jrz7goi8rkhthzyn1iu00nukyn7b0e4thbba8ihw4wc2asbvc
vtsatox2xpxbi51rsgfjnj2n6t0766p9apd3r4wv4vadhh8250k24nb7j8bjkt1tvx3kwmp9owiseuq4d0p62j5d0gumdjku1xskhsxep6ptl5y84di6twwxt49g43vf14c4gng1mm9yyekgsl62i30q1i8u9y2db85fd0axi1w8qidmcf4w7lc0mtlnglp9w4badhkmo5xdgl0cxxq0dykk2y271wsqmpm1mof8poa1p71dl24wqsv493igcy2k5kjny7huirobz6835lhhcv9dgwr99fm4cxz4v3q9ar2bo4cdifaonwhvphlak9eambuz6b9zcj3m5o90xrhh6vmqiz8vqxur50iz9ikd40g1tn1hsklrpe0tzym9bqu81hi3omdgmhgrue12onlli1f7lcji9n4uzlggzyvaont5htn5hriay8okndmujn2u8167rtcbd3nt1y348t9a97j4kkwps8j95zdppom059etcmj6ibwtrgmajll4yqix6eezs7gb372174a2nmgo1t9n96dkv89akk 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:35:57.272 14:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:35:57.272 [2024-07-15 14:28:43.136865] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:35:57.272 [2024-07-15 14:28:43.139226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223053 ] 00:35:57.272 { 00:35:57.272 "subsystems": [ 00:35:57.272 { 00:35:57.272 "subsystem": "bdev", 00:35:57.272 "config": [ 00:35:57.272 { 00:35:57.272 "params": { 00:35:57.272 "trtype": "pcie", 00:35:57.272 "traddr": "0000:00:10.0", 00:35:57.272 "name": "Nvme0" 00:35:57.272 }, 00:35:57.272 "method": "bdev_nvme_attach_controller" 00:35:57.272 }, 00:35:57.272 { 00:35:57.272 "method": "bdev_wait_for_examine" 00:35:57.272 } 00:35:57.272 ] 00:35:57.272 } 00:35:57.272 ] 00:35:57.272 } 00:35:57.531 [2024-07-15 14:28:43.293760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.531 [2024-07-15 14:28:43.490671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.033  Copying: 4096/4096 [B] (average 4000 kBps) 00:35:59.033 00:35:59.033 14:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:35:59.033 14:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:35:59.033 14:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:35:59.033 14:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:35:59.033 [2024-07-15 14:28:44.958106] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:35:59.033 [2024-07-15 14:28:44.958626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223075 ] 00:35:59.033 { 00:35:59.033 "subsystems": [ 00:35:59.033 { 00:35:59.033 "subsystem": "bdev", 00:35:59.033 "config": [ 00:35:59.033 { 00:35:59.033 "params": { 00:35:59.033 "trtype": "pcie", 00:35:59.033 "traddr": "0000:00:10.0", 00:35:59.033 "name": "Nvme0" 00:35:59.033 }, 00:35:59.033 "method": "bdev_nvme_attach_controller" 00:35:59.033 }, 00:35:59.033 { 00:35:59.033 "method": "bdev_wait_for_examine" 00:35:59.033 } 00:35:59.033 ] 00:35:59.033 } 00:35:59.033 ] 00:35:59.033 } 00:35:59.291 [2024-07-15 14:28:45.133392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.549 [2024-07-15 14:28:45.318234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.742  Copying: 4096/4096 [B] (average 4000 kBps) 00:36:00.742 00:36:00.742 14:28:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:36:00.743 14:28:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ qbmclk4jkidiklega9z81ggqrwb355iosuxi3hizv8s35hk8cu3rf1sij7xivsuy4z3qy4xc5st8sixqgl2bp7hk5yekpyj9vbx8t3p6y1b1tla73tiuyi0rrwzli5blqj4k7k0zbqybjjpi9nqjqdrjpq69znarwat7gmrcevu1zelz8q0mzma8029th951hfvc6fo694k1fbgg100q77kajyouy6z3q0dhbgw0uwrirhqy1psrxuuywjoooomtn6s848wxyq6yqwcehta22ovxzo2lu1pc36sabu9si45gcf2z38bbqnshop10wlx2v96uc34ir468f1bk9yb5fnwwrdn2jfzkklnt08t69jbhwxr419i1ulfuuemzcnpco6yp75dv2zjdhkd17re1mvg4yctfy64r3rgifrwai8spjz9sw4f8etx3inpq0nctbzln0o031pf2xvjjch7mr0ohdh5dxbicx3ds8p23mh3cds6e8uv0cgepfh8pxea8rqm4b106ur2s7vi275x64386h4valcdcvcvjg7z4g4bdykuhwb9kdl4lddgqs6tqym24q4owez7so2zwbipm3xischhzc5tfccqyo2ldyb75ww1qarddmfgq8v4qwjkh8qzw6eoy1ighx5z7lje31edvncxid8vvhhgf8z0r5npmiaivevesx8s7mbbt596oyf7umickuxjczqr9r3m2rr06pqhuyqve4kriyle3wrg9n7103b6x99tcjn8167r7bs1f8zj56byti2xl0dr908ck8ivjusc5ld4tzq293tjg0dmwe8lsswbbnxut8cxcue9bzzoxvxw49w0phnic29zq3rtlqd9g4xuw58qs77brq1vjifj87lk5oo2k3mcojp2sshd0ogc4y2tpd93oph6epdt44d66bkkktgl0h4t6j9pij9m7xcpkayvdo27dmxx2x6nfr7v1ckqe000bmootdmmovnp4pgaylljj56kmymjr0jbun9lxf985vs5rlk5n087mkg9jksu284yz8on9caj6cnedb42q1v6jqqb932qveksplgiam1d3nx6hagcap4iemk1x5yccyzfqpeg61in2kgmypsfkemt776xn8hp7ng5z567sk720jijjhcxfcn995nqyg3owooq4yclohd5azx16ccyk7cf9628xgr1znch32jqjn7nhq5y0q77dt7eg2r94qbg7jca0dcf2zvhoqmt6rqvodcsvtynbjpffqfotpyn7t9tj7q5r0bbxjgk74l6lo1p8ox3wkm9jp6n470tyrw9yazmoftozl5q8ipfo5e9wg2wwtn6lbojhixgsiocacvkry6k9pgk77yvzw47vwcing0s2a6skndwuhcyit166v6unnocrkm0rp2h5xb5p2l2h8px76mxrq42ib0keh88afm2uscd1j6szdk5bn9fb846wg3b697sef8donld0o4bm3curxxz14cznwddwgr7v4r57e1tmch14di2v8bqz1zssqfjnqc69e0lwk8gk6bcbi1anj85yn7ff6f302iz33srlcro8zh9g436iw7rjxsw0h6928gmgx2cixnx77gbadwpj597sv9rgbhvg1fjihvkaxnlaiea85z02yzo04saoe4aqta2b9sfq4smhm66uqx1hr98jsi0xs8yhdzwk5thsagjebmnvfpsycazmdzxf1vuwap7k8zh0vb73ysdqqv5pikr3y9khyypbnok7p0ssaxgic7p403mf4wlqsc3rkr6kf98y660j9g0rcsljxzca2ld1bewd2qefn37l0awin5ybh05inf9k06xkj5evmq9ysu7krx5qm3a61s0kz3o74vz7j1peb2ub8i58f0ovehtf0tn3ds6qank3g9k1nb0o5uj9w7r45pcdoy8qi0ckv8rm1c1ouo7wp8epale2nyiqplwhfi6h7d6wliuf88p4vtv07xs2oc3c7f6lai70rsmbw3t1n0ih6n3356uws2kum5vbtwfr9c9v91yoyar1bvviejbbyzpb2wrendxlgkkz4o10p8r9ffuzadtwli1s3s0tzx86u9i3ygsf2yjrr7z84fg2gixleapkgi4o0jg2io5zmszxh68mrl0nmgw51wxq90eot6ft36pazck40qhtwdfgo6rakynlarmzf4uc62qex3tf197xxthoueyaxnanvoy9vgewg44v6mubfnenrf3jfm35n9bbdv7zx3950ft2unhenv2l34hkv0v9s
5i5af2grp3st553j3w6izp7s0e7sh275hpbd8jczo519uu6v3guzo7ka9ubc04ytqq1q9y91oxjzm694c9dz78dkqndk4kdufavamxwplzdymfhicjou9r2wy9cmtq2qkb58exgbos1qrnr9wsvnactcmcazxj2il3y4wabgx541br6pih867ftp0mdo6beyvflkirwqpt9kfj87hopoxt7fq9w9nzkb8af46xdv8c65d4of2dx92v7ked6ifex39y5byr7rd7ihnaq5m3qbinv576h650p2a4xo73u2oeunssccu4rs50lqlaruuflffxwbhtskc2fzjsxrtue7iusyfd4f2fhpv9nrk36g8prqujaolae04h5tdhacy1f83gg38lrxhjdwtmz6ob3h81jdnzsgco50rcx4ieqktosj65mpdqse9k9igojwybzcx0wy7vlvv4ypba7ai4obufrz6ue4oktyl07ed5eizkjkeajlo2nhooxqf3l1h9ncpcrqp8wxt0uu0zmeeb8otpqniaa3mo6oykz4hr3xwzod05rc379cyhft3lrkuq6q7sz270sblxqktpxxdz10f4t56jluf7hq4mhruq2f0helvm1pk10ilh9r065idl9ywfe1ehdevf0794gvank8z6t4upkmjybu66i7ihtakgm5qbirub32q0z8gf83om4gpgijy91hxnwcpk66zb01s7eeggkr4hjaup4san4cjzu6qbacn39jj8d7b31h70iob4xwb2tpi93kd47udxidhgdcu3nkglx2p5p4mz5po7mnhxvu4terdxfp7geofy26szytwol25lws3v0qnzcrtguhl9rq1p8mu2l7qmzl54059i8gm2o1edfd5yzvw7vpjv04ce0m1svpxjhqin92kz1a4mpfvehsyyxq8gqmgjv28z5ie6y7brput6e6nx3flv7gs66ukoy38o62a5bo6pc2i6soogyg35kl123vgg2u85ernvv7wzihue3saeia15gx1mr2txz7eotwqfp44ri4bouxiqyof2k4lezlul5eaggqogia972vf6fop7g1zyseu11a8qb8ga9je024pesqbqf5o8si2uedfcn4jdxp86qp5d1zh1cwqfne0dwgggyg2zucc7gip84hun7dykvjodvkszb2fqcbnrfpu7duatu140lcbvnavs35awq5jrz7goi8rkhthzyn1iu00nukyn7b0e4thbba8ihw4wc2asbvcvtsatox2xpxbi51rsgfjnj2n6t0766p9apd3r4wv4vadhh8250k24nb7j8bjkt1tvx3kwmp9owiseuq4d0p62j5d0gumdjku1xskhsxep6ptl5y84di6twwxt49g43vf14c4gng1mm9yyekgsl62i30q1i8u9y2db85fd0axi1w8qidmcf4w7lc0mtlnglp9w4badhkmo5xdgl0cxxq0dykk2y271wsqmpm1mof8poa1p71dl24wqsv493igcy2k5kjny7huirobz6835lhhcv9dgwr99fm4cxz4v3q9ar2bo4cdifaonwhvphlak9eambuz6b9zcj3m5o90xrhh6vmqiz8vqxur50iz9ikd40g1tn1hsklrpe0tzym9bqu81hi3omdgmhgrue12onlli1f7lcji9n4uzlggzyvaont5htn5hriay8okndmujn2u8167rtcbd3nt1y348t9a97j4kkwps8j95zdppom059etcmj6ibwtrgmajll4yqix6eezs7gb372174a2nmgo1t9n96dkv89akk == 
\q\b\m\c\l\k\4\j\k\i\d\i\k\l\e\g\a\9\z\8\1\g\g\q\r\w\b\3\5\5\i\o\s\u\x\i\3\h\i\z\v\8\s\3\5\h\k\8\c\u\3\r\f\1\s\i\j\7\x\i\v\s\u\y\4\z\3\q\y\4\x\c\5\s\t\8\s\i\x\q\g\l\2\b\p\7\h\k\5\y\e\k\p\y\j\9\v\b\x\8\t\3\p\6\y\1\b\1\t\l\a\7\3\t\i\u\y\i\0\r\r\w\z\l\i\5\b\l\q\j\4\k\7\k\0\z\b\q\y\b\j\j\p\i\9\n\q\j\q\d\r\j\p\q\6\9\z\n\a\r\w\a\t\7\g\m\r\c\e\v\u\1\z\e\l\z\8\q\0\m\z\m\a\8\0\2\9\t\h\9\5\1\h\f\v\c\6\f\o\6\9\4\k\1\f\b\g\g\1\0\0\q\7\7\k\a\j\y\o\u\y\6\z\3\q\0\d\h\b\g\w\0\u\w\r\i\r\h\q\y\1\p\s\r\x\u\u\y\w\j\o\o\o\o\m\t\n\6\s\8\4\8\w\x\y\q\6\y\q\w\c\e\h\t\a\2\2\o\v\x\z\o\2\l\u\1\p\c\3\6\s\a\b\u\9\s\i\4\5\g\c\f\2\z\3\8\b\b\q\n\s\h\o\p\1\0\w\l\x\2\v\9\6\u\c\3\4\i\r\4\6\8\f\1\b\k\9\y\b\5\f\n\w\w\r\d\n\2\j\f\z\k\k\l\n\t\0\8\t\6\9\j\b\h\w\x\r\4\1\9\i\1\u\l\f\u\u\e\m\z\c\n\p\c\o\6\y\p\7\5\d\v\2\z\j\d\h\k\d\1\7\r\e\1\m\v\g\4\y\c\t\f\y\6\4\r\3\r\g\i\f\r\w\a\i\8\s\p\j\z\9\s\w\4\f\8\e\t\x\3\i\n\p\q\0\n\c\t\b\z\l\n\0\o\0\3\1\p\f\2\x\v\j\j\c\h\7\m\r\0\o\h\d\h\5\d\x\b\i\c\x\3\d\s\8\p\2\3\m\h\3\c\d\s\6\e\8\u\v\0\c\g\e\p\f\h\8\p\x\e\a\8\r\q\m\4\b\1\0\6\u\r\2\s\7\v\i\2\7\5\x\6\4\3\8\6\h\4\v\a\l\c\d\c\v\c\v\j\g\7\z\4\g\4\b\d\y\k\u\h\w\b\9\k\d\l\4\l\d\d\g\q\s\6\t\q\y\m\2\4\q\4\o\w\e\z\7\s\o\2\z\w\b\i\p\m\3\x\i\s\c\h\h\z\c\5\t\f\c\c\q\y\o\2\l\d\y\b\7\5\w\w\1\q\a\r\d\d\m\f\g\q\8\v\4\q\w\j\k\h\8\q\z\w\6\e\o\y\1\i\g\h\x\5\z\7\l\j\e\3\1\e\d\v\n\c\x\i\d\8\v\v\h\h\g\f\8\z\0\r\5\n\p\m\i\a\i\v\e\v\e\s\x\8\s\7\m\b\b\t\5\9\6\o\y\f\7\u\m\i\c\k\u\x\j\c\z\q\r\9\r\3\m\2\r\r\0\6\p\q\h\u\y\q\v\e\4\k\r\i\y\l\e\3\w\r\g\9\n\7\1\0\3\b\6\x\9\9\t\c\j\n\8\1\6\7\r\7\b\s\1\f\8\z\j\5\6\b\y\t\i\2\x\l\0\d\r\9\0\8\c\k\8\i\v\j\u\s\c\5\l\d\4\t\z\q\2\9\3\t\j\g\0\d\m\w\e\8\l\s\s\w\b\b\n\x\u\t\8\c\x\c\u\e\9\b\z\z\o\x\v\x\w\4\9\w\0\p\h\n\i\c\2\9\z\q\3\r\t\l\q\d\9\g\4\x\u\w\5\8\q\s\7\7\b\r\q\1\v\j\i\f\j\8\7\l\k\5\o\o\2\k\3\m\c\o\j\p\2\s\s\h\d\0\o\g\c\4\y\2\t\p\d\9\3\o\p\h\6\e\p\d\t\4\4\d\6\6\b\k\k\k\t\g\l\0\h\4\t\6\j\9\p\i\j\9\m\7\x\c\p\k\a\y\v\d\o\2\7\d\m\x\x\2\x\6\n\f\r\7\v\1\c\k\q\e\0\0\0\b\m\o\o\t\d\m\m\o\v\n\p\4\p\g\a\y\l\l\j\j\5\6\k\m\y\m\j\r\0\j\b\u\n\9\l\x\f\9\8\5\v\s\5\r\l\k\5\n\0\8\7\m\k\g\9\j\k\s\u\2\8\4\y\z\8\o\n\9\c\a\j\6\c\n\e\d\b\4\2\q\1\v\6\j\q\q\b\9\3\2\q\v\e\k\s\p\l\g\i\a\m\1\d\3\n\x\6\h\a\g\c\a\p\4\i\e\m\k\1\x\5\y\c\c\y\z\f\q\p\e\g\6\1\i\n\2\k\g\m\y\p\s\f\k\e\m\t\7\7\6\x\n\8\h\p\7\n\g\5\z\5\6\7\s\k\7\2\0\j\i\j\j\h\c\x\f\c\n\9\9\5\n\q\y\g\3\o\w\o\o\q\4\y\c\l\o\h\d\5\a\z\x\1\6\c\c\y\k\7\c\f\9\6\2\8\x\g\r\1\z\n\c\h\3\2\j\q\j\n\7\n\h\q\5\y\0\q\7\7\d\t\7\e\g\2\r\9\4\q\b\g\7\j\c\a\0\d\c\f\2\z\v\h\o\q\m\t\6\r\q\v\o\d\c\s\v\t\y\n\b\j\p\f\f\q\f\o\t\p\y\n\7\t\9\t\j\7\q\5\r\0\b\b\x\j\g\k\7\4\l\6\l\o\1\p\8\o\x\3\w\k\m\9\j\p\6\n\4\7\0\t\y\r\w\9\y\a\z\m\o\f\t\o\z\l\5\q\8\i\p\f\o\5\e\9\w\g\2\w\w\t\n\6\l\b\o\j\h\i\x\g\s\i\o\c\a\c\v\k\r\y\6\k\9\p\g\k\7\7\y\v\z\w\4\7\v\w\c\i\n\g\0\s\2\a\6\s\k\n\d\w\u\h\c\y\i\t\1\6\6\v\6\u\n\n\o\c\r\k\m\0\r\p\2\h\5\x\b\5\p\2\l\2\h\8\p\x\7\6\m\x\r\q\4\2\i\b\0\k\e\h\8\8\a\f\m\2\u\s\c\d\1\j\6\s\z\d\k\5\b\n\9\f\b\8\4\6\w\g\3\b\6\9\7\s\e\f\8\d\o\n\l\d\0\o\4\b\m\3\c\u\r\x\x\z\1\4\c\z\n\w\d\d\w\g\r\7\v\4\r\5\7\e\1\t\m\c\h\1\4\d\i\2\v\8\b\q\z\1\z\s\s\q\f\j\n\q\c\6\9\e\0\l\w\k\8\g\k\6\b\c\b\i\1\a\n\j\8\5\y\n\7\f\f\6\f\3\0\2\i\z\3\3\s\r\l\c\r\o\8\z\h\9\g\4\3\6\i\w\7\r\j\x\s\w\0\h\6\9\2\8\g\m\g\x\2\c\i\x\n\x\7\7\g\b\a\d\w\p\j\5\9\7\s\v\9\r\g\b\h\v\g\1\f\j\i\h\v\k\a\x\n\l\a\i\e\a\8\5\z\0\2\y\z\o\0\4\s\a\o\e\4\a\q\t\a\2\b\9\s\f\q\4\s\m\h\m\6\6\u\q\x\1\h\r\9\8\j\s\i\0\x\s\8\y\h\d\z\w\k\5\t\h\s\a\g\j\e\b\m\n\v\f\p\s\y\c\a\z\m\d\z\x\f\1\v\u\w\a\p\7\k\8\z\h\0\v\b\7\3\y\s\d\q\q\v\5\p\i\k\r\3\y\9\k\h\y\y\p\b\n\o\k\7\p\0\s\s\a\x\g\i\c\7\p\4\0\
3\m\f\4\w\l\q\s\c\3\r\k\r\6\k\f\9\8\y\6\6\0\j\9\g\0\r\c\s\l\j\x\z\c\a\2\l\d\1\b\e\w\d\2\q\e\f\n\3\7\l\0\a\w\i\n\5\y\b\h\0\5\i\n\f\9\k\0\6\x\k\j\5\e\v\m\q\9\y\s\u\7\k\r\x\5\q\m\3\a\6\1\s\0\k\z\3\o\7\4\v\z\7\j\1\p\e\b\2\u\b\8\i\5\8\f\0\o\v\e\h\t\f\0\t\n\3\d\s\6\q\a\n\k\3\g\9\k\1\n\b\0\o\5\u\j\9\w\7\r\4\5\p\c\d\o\y\8\q\i\0\c\k\v\8\r\m\1\c\1\o\u\o\7\w\p\8\e\p\a\l\e\2\n\y\i\q\p\l\w\h\f\i\6\h\7\d\6\w\l\i\u\f\8\8\p\4\v\t\v\0\7\x\s\2\o\c\3\c\7\f\6\l\a\i\7\0\r\s\m\b\w\3\t\1\n\0\i\h\6\n\3\3\5\6\u\w\s\2\k\u\m\5\v\b\t\w\f\r\9\c\9\v\9\1\y\o\y\a\r\1\b\v\v\i\e\j\b\b\y\z\p\b\2\w\r\e\n\d\x\l\g\k\k\z\4\o\1\0\p\8\r\9\f\f\u\z\a\d\t\w\l\i\1\s\3\s\0\t\z\x\8\6\u\9\i\3\y\g\s\f\2\y\j\r\r\7\z\8\4\f\g\2\g\i\x\l\e\a\p\k\g\i\4\o\0\j\g\2\i\o\5\z\m\s\z\x\h\6\8\m\r\l\0\n\m\g\w\5\1\w\x\q\9\0\e\o\t\6\f\t\3\6\p\a\z\c\k\4\0\q\h\t\w\d\f\g\o\6\r\a\k\y\n\l\a\r\m\z\f\4\u\c\6\2\q\e\x\3\t\f\1\9\7\x\x\t\h\o\u\e\y\a\x\n\a\n\v\o\y\9\v\g\e\w\g\4\4\v\6\m\u\b\f\n\e\n\r\f\3\j\f\m\3\5\n\9\b\b\d\v\7\z\x\3\9\5\0\f\t\2\u\n\h\e\n\v\2\l\3\4\h\k\v\0\v\9\s\5\i\5\a\f\2\g\r\p\3\s\t\5\5\3\j\3\w\6\i\z\p\7\s\0\e\7\s\h\2\7\5\h\p\b\d\8\j\c\z\o\5\1\9\u\u\6\v\3\g\u\z\o\7\k\a\9\u\b\c\0\4\y\t\q\q\1\q\9\y\9\1\o\x\j\z\m\6\9\4\c\9\d\z\7\8\d\k\q\n\d\k\4\k\d\u\f\a\v\a\m\x\w\p\l\z\d\y\m\f\h\i\c\j\o\u\9\r\2\w\y\9\c\m\t\q\2\q\k\b\5\8\e\x\g\b\o\s\1\q\r\n\r\9\w\s\v\n\a\c\t\c\m\c\a\z\x\j\2\i\l\3\y\4\w\a\b\g\x\5\4\1\b\r\6\p\i\h\8\6\7\f\t\p\0\m\d\o\6\b\e\y\v\f\l\k\i\r\w\q\p\t\9\k\f\j\8\7\h\o\p\o\x\t\7\f\q\9\w\9\n\z\k\b\8\a\f\4\6\x\d\v\8\c\6\5\d\4\o\f\2\d\x\9\2\v\7\k\e\d\6\i\f\e\x\3\9\y\5\b\y\r\7\r\d\7\i\h\n\a\q\5\m\3\q\b\i\n\v\5\7\6\h\6\5\0\p\2\a\4\x\o\7\3\u\2\o\e\u\n\s\s\c\c\u\4\r\s\5\0\l\q\l\a\r\u\u\f\l\f\f\x\w\b\h\t\s\k\c\2\f\z\j\s\x\r\t\u\e\7\i\u\s\y\f\d\4\f\2\f\h\p\v\9\n\r\k\3\6\g\8\p\r\q\u\j\a\o\l\a\e\0\4\h\5\t\d\h\a\c\y\1\f\8\3\g\g\3\8\l\r\x\h\j\d\w\t\m\z\6\o\b\3\h\8\1\j\d\n\z\s\g\c\o\5\0\r\c\x\4\i\e\q\k\t\o\s\j\6\5\m\p\d\q\s\e\9\k\9\i\g\o\j\w\y\b\z\c\x\0\w\y\7\v\l\v\v\4\y\p\b\a\7\a\i\4\o\b\u\f\r\z\6\u\e\4\o\k\t\y\l\0\7\e\d\5\e\i\z\k\j\k\e\a\j\l\o\2\n\h\o\o\x\q\f\3\l\1\h\9\n\c\p\c\r\q\p\8\w\x\t\0\u\u\0\z\m\e\e\b\8\o\t\p\q\n\i\a\a\3\m\o\6\o\y\k\z\4\h\r\3\x\w\z\o\d\0\5\r\c\3\7\9\c\y\h\f\t\3\l\r\k\u\q\6\q\7\s\z\2\7\0\s\b\l\x\q\k\t\p\x\x\d\z\1\0\f\4\t\5\6\j\l\u\f\7\h\q\4\m\h\r\u\q\2\f\0\h\e\l\v\m\1\p\k\1\0\i\l\h\9\r\0\6\5\i\d\l\9\y\w\f\e\1\e\h\d\e\v\f\0\7\9\4\g\v\a\n\k\8\z\6\t\4\u\p\k\m\j\y\b\u\6\6\i\7\i\h\t\a\k\g\m\5\q\b\i\r\u\b\3\2\q\0\z\8\g\f\8\3\o\m\4\g\p\g\i\j\y\9\1\h\x\n\w\c\p\k\6\6\z\b\0\1\s\7\e\e\g\g\k\r\4\h\j\a\u\p\4\s\a\n\4\c\j\z\u\6\q\b\a\c\n\3\9\j\j\8\d\7\b\3\1\h\7\0\i\o\b\4\x\w\b\2\t\p\i\9\3\k\d\4\7\u\d\x\i\d\h\g\d\c\u\3\n\k\g\l\x\2\p\5\p\4\m\z\5\p\o\7\m\n\h\x\v\u\4\t\e\r\d\x\f\p\7\g\e\o\f\y\2\6\s\z\y\t\w\o\l\2\5\l\w\s\3\v\0\q\n\z\c\r\t\g\u\h\l\9\r\q\1\p\8\m\u\2\l\7\q\m\z\l\5\4\0\5\9\i\8\g\m\2\o\1\e\d\f\d\5\y\z\v\w\7\v\p\j\v\0\4\c\e\0\m\1\s\v\p\x\j\h\q\i\n\9\2\k\z\1\a\4\m\p\f\v\e\h\s\y\y\x\q\8\g\q\m\g\j\v\2\8\z\5\i\e\6\y\7\b\r\p\u\t\6\e\6\n\x\3\f\l\v\7\g\s\6\6\u\k\o\y\3\8\o\6\2\a\5\b\o\6\p\c\2\i\6\s\o\o\g\y\g\3\5\k\l\1\2\3\v\g\g\2\u\8\5\e\r\n\v\v\7\w\z\i\h\u\e\3\s\a\e\i\a\1\5\g\x\1\m\r\2\t\x\z\7\e\o\t\w\q\f\p\4\4\r\i\4\b\o\u\x\i\q\y\o\f\2\k\4\l\e\z\l\u\l\5\e\a\g\g\q\o\g\i\a\9\7\2\v\f\6\f\o\p\7\g\1\z\y\s\e\u\1\1\a\8\q\b\8\g\a\9\j\e\0\2\4\p\e\s\q\b\q\f\5\o\8\s\i\2\u\e\d\f\c\n\4\j\d\x\p\8\6\q\p\5\d\1\z\h\1\c\w\q\f\n\e\0\d\w\g\g\g\y\g\2\z\u\c\c\7\g\i\p\8\4\h\u\n\7\d\y\k\v\j\o\d\v\k\s\z\b\2\f\q\c\b\n\r\f\p\u\7\d\u\a\t\u\1\4\0\l\c\b\v\n\a\v\s\3\5\a\w\q\5\j\r\z\7\g\o\i\8\r\k\h\t\h\z\y\n\1\i\u\0\0\n\u\k\y\n\7\b\0\e\4\t\h\b\b\a\8\i\h\w\4\w\c\2\a\s\b\v\c\v\t\s\a\t
\o\x\2\x\p\x\b\i\5\1\r\s\g\f\j\n\j\2\n\6\t\0\7\6\6\p\9\a\p\d\3\r\4\w\v\4\v\a\d\h\h\8\2\5\0\k\2\4\n\b\7\j\8\b\j\k\t\1\t\v\x\3\k\w\m\p\9\o\w\i\s\e\u\q\4\d\0\p\6\2\j\5\d\0\g\u\m\d\j\k\u\1\x\s\k\h\s\x\e\p\6\p\t\l\5\y\8\4\d\i\6\t\w\w\x\t\4\9\g\4\3\v\f\1\4\c\4\g\n\g\1\m\m\9\y\y\e\k\g\s\l\6\2\i\3\0\q\1\i\8\u\9\y\2\d\b\8\5\f\d\0\a\x\i\1\w\8\q\i\d\m\c\f\4\w\7\l\c\0\m\t\l\n\g\l\p\9\w\4\b\a\d\h\k\m\o\5\x\d\g\l\0\c\x\x\q\0\d\y\k\k\2\y\2\7\1\w\s\q\m\p\m\1\m\o\f\8\p\o\a\1\p\7\1\d\l\2\4\w\q\s\v\4\9\3\i\g\c\y\2\k\5\k\j\n\y\7\h\u\i\r\o\b\z\6\8\3\5\l\h\h\c\v\9\d\g\w\r\9\9\f\m\4\c\x\z\4\v\3\q\9\a\r\2\b\o\4\c\d\i\f\a\o\n\w\h\v\p\h\l\a\k\9\e\a\m\b\u\z\6\b\9\z\c\j\3\m\5\o\9\0\x\r\h\h\6\v\m\q\i\z\8\v\q\x\u\r\5\0\i\z\9\i\k\d\4\0\g\1\t\n\1\h\s\k\l\r\p\e\0\t\z\y\m\9\b\q\u\8\1\h\i\3\o\m\d\g\m\h\g\r\u\e\1\2\o\n\l\l\i\1\f\7\l\c\j\i\9\n\4\u\z\l\g\g\z\y\v\a\o\n\t\5\h\t\n\5\h\r\i\a\y\8\o\k\n\d\m\u\j\n\2\u\8\1\6\7\r\t\c\b\d\3\n\t\1\y\3\4\8\t\9\a\9\7\j\4\k\k\w\p\s\8\j\9\5\z\d\p\p\o\m\0\5\9\e\t\c\m\j\6\i\b\w\t\r\g\m\a\j\l\l\4\y\q\i\x\6\e\e\z\s\7\g\b\3\7\2\1\7\4\a\2\n\m\g\o\1\t\9\n\9\6\d\k\v\8\9\a\k\k ]] 00:36:00.743 00:36:00.743 real 0m3.671s 00:36:00.743 user 0m3.065s 00:36:00.743 sys 0m0.473s 00:36:00.743 14:28:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:00.743 14:28:46 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:36:00.743 ************************************ 00:36:00.743 END TEST dd_rw_offset 00:36:00.743 ************************************ 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1142 -- # return 0 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:36:01.012 14:28:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:36:01.012 [2024-07-15 14:28:46.809045] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
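The dd_rw and dd_rw_offset passes traced above all follow the same cycle: write a generated pattern to the Nvme0n1 bdev, read the same region back into a second dump file, and compare the two, with the bdev described by the JSON config that is dumped before every spdk_dd invocation in this log. A minimal stand-alone sketch of one offset pass, assuming an SPDK build under ./build and an NVMe controller at PCI address 0000:00:10.0; the process substitution, file names, and the conf variable are illustrative and not part of the test scripts themselves:

    # bdev config equivalent to the one printed before each spdk_dd run above
    conf='{"subsystems":[{"subsystem":"bdev","config":[
          {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
           "method":"bdev_nvme_attach_controller"},
          {"method":"bdev_wait_for_examine"}]}]}'
    # write the generated pattern at offset 1 (--seek), then read the same region back (--skip/--count)
    ./build/bin/spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")
    ./build/bin/spdk_dd --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json <(printf '%s' "$conf")
    cmp dd.dump0 dd.dump1   # dd_rw uses diff -q; dd_rw_offset compares via read -rn4096 and [[ ... ]] as seen above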
00:36:01.012 [2024-07-15 14:28:46.809984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223117 ] 00:36:01.012 { 00:36:01.012 "subsystems": [ 00:36:01.012 { 00:36:01.012 "subsystem": "bdev", 00:36:01.012 "config": [ 00:36:01.012 { 00:36:01.012 "params": { 00:36:01.012 "trtype": "pcie", 00:36:01.012 "traddr": "0000:00:10.0", 00:36:01.012 "name": "Nvme0" 00:36:01.012 }, 00:36:01.012 "method": "bdev_nvme_attach_controller" 00:36:01.012 }, 00:36:01.012 { 00:36:01.012 "method": "bdev_wait_for_examine" 00:36:01.012 } 00:36:01.012 ] 00:36:01.012 } 00:36:01.012 ] 00:36:01.012 } 00:36:01.012 [2024-07-15 14:28:46.965818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:01.270 [2024-07-15 14:28:47.172116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.905  Copying: 1024/1024 [kB] (average 1000 MBps) 00:36:02.905 00:36:02.905 14:28:48 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:02.905 00:36:02.905 real 0m43.194s 00:36:02.905 user 0m36.051s 00:36:02.905 sys 0m5.596s 00:36:02.905 14:28:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:02.905 14:28:48 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:36:02.905 ************************************ 00:36:02.905 END TEST spdk_dd_basic_rw 00:36:02.905 ************************************ 00:36:02.905 14:28:48 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:36:02.905 14:28:48 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:36:02.905 14:28:48 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:02.905 14:28:48 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:02.905 14:28:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:36:02.905 ************************************ 00:36:02.905 START TEST spdk_dd_posix 00:36:02.905 ************************************ 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:36:02.905 * Looking for test storage... 
00:36:02.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # 
printf '* First test run%s\n' ', using AIO' 00:36:02.905 * First test run, using AIO 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:02.905 ************************************ 00:36:02.905 START TEST dd_flag_append 00:36:02.905 ************************************ 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1123 -- # append 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=6t45p8ew8n3uxffryoz7jeaz1kvoglmu 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ri4ill9d7odyd3sxfsikg5v0jnollhjp 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s 6t45p8ew8n3uxffryoz7jeaz1kvoglmu 00:36:02.905 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ri4ill9d7odyd3sxfsikg5v0jnollhjp 00:36:02.906 14:28:48 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:36:02.906 [2024-07-15 14:28:48.776293] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:02.906 [2024-07-15 14:28:48.776465] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223203 ] 00:36:03.165 [2024-07-15 14:28:48.929299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.165 [2024-07-15 14:28:49.116607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.873  Copying: 32/32 [B] (average 31 kBps) 00:36:04.873 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ri4ill9d7odyd3sxfsikg5v0jnollhjp6t45p8ew8n3uxffryoz7jeaz1kvoglmu == \r\i\4\i\l\l\9\d\7\o\d\y\d\3\s\x\f\s\i\k\g\5\v\0\j\n\o\l\l\h\j\p\6\t\4\5\p\8\e\w\8\n\3\u\x\f\f\r\y\o\z\7\j\e\a\z\1\k\v\o\g\l\m\u ]] 00:36:04.873 00:36:04.873 real 0m1.713s 00:36:04.873 user 0m1.382s 00:36:04.873 sys 0m0.208s 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:36:04.873 ************************************ 00:36:04.873 END TEST dd_flag_append 00:36:04.873 ************************************ 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:04.873 ************************************ 00:36:04.873 START TEST dd_flag_directory 00:36:04.873 ************************************ 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1123 -- # directory 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:04.873 14:28:50 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:04.873 [2024-07-15 14:28:50.537988] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:04.873 [2024-07-15 14:28:50.538554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223250 ] 00:36:04.873 [2024-07-15 14:28:50.685634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.132 [2024-07-15 14:28:50.886939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.390 [2024-07-15 14:28:51.159986] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:05.390 [2024-07-15 14:28:51.160513] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:05.390 [2024-07-15 14:28:51.160832] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:05.985 [2024-07-15 14:28:51.837742] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:06.245 14:28:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:06.245 [2024-07-15 14:28:52.243763] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:06.245 [2024-07-15 14:28:52.244514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223277 ] 00:36:06.504 [2024-07-15 14:28:52.390308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.764 [2024-07-15 14:28:52.577554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:07.023 [2024-07-15 14:28:52.860211] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:07.023 [2024-07-15 14:28:52.860666] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:07.023 [2024-07-15 14:28:52.860920] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:07.590 [2024-07-15 14:28:53.533534] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:08.156 00:36:08.156 real 0m3.389s 00:36:08.156 user 0m2.763s 00:36:08.156 sys 0m0.399s 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:36:08.156 ************************************ 00:36:08.156 END TEST dd_flag_directory 00:36:08.156 ************************************ 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:08.156 ************************************ 00:36:08.156 START TEST dd_flag_nofollow 00:36:08.156 ************************************ 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1123 -- # nofollow 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:08.156 14:28:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:08.156 [2024-07-15 14:28:54.002517] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:08.156 [2024-07-15 14:28:54.002745] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223316 ] 00:36:08.414 [2024-07-15 14:28:54.166034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:08.414 [2024-07-15 14:28:54.369966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:08.672 [2024-07-15 14:28:54.654788] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:36:08.672 [2024-07-15 14:28:54.655078] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:36:08.672 [2024-07-15 14:28:54.655207] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:09.620 [2024-07-15 14:28:55.318210] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:36:09.878 14:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:09.878 [2024-07-15 14:28:55.725377] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:09.878 [2024-07-15 14:28:55.726034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223348 ] 00:36:10.138 [2024-07-15 14:28:55.890413] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.138 [2024-07-15 14:28:56.076816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.395 [2024-07-15 14:28:56.360616] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:36:10.395 [2024-07-15 14:28:56.361318] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:36:10.395 [2024-07-15 14:28:56.361581] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:11.331 [2024-07-15 14:28:57.032141] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:36:11.589 14:28:57 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:11.589 [2024-07-15 14:28:57.449424] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:11.589 [2024-07-15 14:28:57.449832] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223371 ] 00:36:11.847 [2024-07-15 14:28:57.608248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.847 [2024-07-15 14:28:57.808190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.482  Copying: 512/512 [B] (average 500 kBps) 00:36:13.482 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ hwc1jm1pct6ulu87obcv3hnnlhird0kczst2duiyvylazoml4rx5bruf0wxw22otgb7hp1jdw8ysbo33stvy16ied06vnlo7m7pmuxvswsnzv0bbgvicwewdpuauxjke3ylgzpd76jme4e9pdkf34wq619qzhyu7hl0buqwu369piwxayyqpvq31hcywh444ollob1tmxey5ufrkse17tmssw3w5w7fn3tf9bmcy7asmb1q9526dsk1mzyhd4h7v88v6wvutbzokhsod68xbfquh7p7us596b05qqpr8hbwv7pqqd1g0xp5etxcba5dlm2uhd3qjafyrnl6gdukp1tnw0h4yb85dlbc6wp3cy09gnwgho8bxcentbo5p8gn2imaqmrx00o3pr02v0cllbfluj30k0fvn73221wl8eqk5aehrsfvwacctdyjd036xg6ycqx09xkm1urc1enhrbyerufnazv92hqlibwr360h3zmdiilgti6rjv5hyk1s7 == \h\w\c\1\j\m\1\p\c\t\6\u\l\u\8\7\o\b\c\v\3\h\n\n\l\h\i\r\d\0\k\c\z\s\t\2\d\u\i\y\v\y\l\a\z\o\m\l\4\r\x\5\b\r\u\f\0\w\x\w\2\2\o\t\g\b\7\h\p\1\j\d\w\8\y\s\b\o\3\3\s\t\v\y\1\6\i\e\d\0\6\v\n\l\o\7\m\7\p\m\u\x\v\s\w\s\n\z\v\0\b\b\g\v\i\c\w\e\w\d\p\u\a\u\x\j\k\e\3\y\l\g\z\p\d\7\6\j\m\e\4\e\9\p\d\k\f\3\4\w\q\6\1\9\q\z\h\y\u\7\h\l\0\b\u\q\w\u\3\6\9\p\i\w\x\a\y\y\q\p\v\q\3\1\h\c\y\w\h\4\4\4\o\l\l\o\b\1\t\m\x\e\y\5\u\f\r\k\s\e\1\7\t\m\s\s\w\3\w\5\w\7\f\n\3\t\f\9\b\m\c\y\7\a\s\m\b\1\q\9\5\2\6\d\s\k\1\m\z\y\h\d\4\h\7\v\8\8\v\6\w\v\u\t\b\z\o\k\h\s\o\d\6\8\x\b\f\q\u\h\7\p\7\u\s\5\9\6\b\0\5\q\q\p\r\8\h\b\w\v\7\p\q\q\d\1\g\0\x\p\5\e\t\x\c\b\a\5\d\l\m\2\u\h\d\3\q\j\a\f\y\r\n\l\6\g\d\u\k\p\1\t\n\w\0\h\4\y\b\8\5\d\l\b\c\6\w\p\3\c\y\0\9\g\n\w\g\h\o\8\b\x\c\e\n\t\b\o\5\p\8\g\n\2\i\m\a\q\m\r\x\0\0\o\3\p\r\0\2\v\0\c\l\l\b\f\l\u\j\3\0\k\0\f\v\n\7\3\2\2\1\w\l\8\e\q\k\5\a\e\h\r\s\f\v\w\a\c\c\t\d\y\j\d\0\3\6\x\g\6\y\c\q\x\0\9\x\k\m\1\u\r\c\1\e\n\h\r\b\y\e\r\u\f\n\a\z\v\9\2\h\q\l\i\b\w\r\3\6\0\h\3\z\m\d\i\i\l\g\t\i\6\r\j\v\5\h\y\k\1\s\7 ]] 00:36:13.482 00:36:13.482 real 0m5.216s 00:36:13.482 user 0m4.176s 00:36:13.482 sys 0m0.687s 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:36:13.482 ************************************ 00:36:13.482 END TEST dd_flag_nofollow 00:36:13.482 ************************************ 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:13.482 ************************************ 00:36:13.482 START TEST dd_flag_noatime 00:36:13.482 ************************************ 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1123 -- # noatime 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime 
-- dd/posix.sh@54 -- # local atime_of 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721053738 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721053739 00:36:13.482 14:28:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:36:14.467 14:29:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:14.467 [2024-07-15 14:29:00.288090] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:14.467 [2024-07-15 14:29:00.288486] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223428 ] 00:36:14.467 [2024-07-15 14:29:00.449961] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.726 [2024-07-15 14:29:00.639175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:16.364  Copying: 512/512 [B] (average 500 kBps) 00:36:16.364 00:36:16.364 14:29:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:16.364 14:29:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721053738 )) 00:36:16.364 14:29:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:16.364 14:29:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721053739 )) 00:36:16.365 14:29:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:16.365 [2024-07-15 14:29:02.028704] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:16.365 [2024-07-15 14:29:02.029139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223454 ] 00:36:16.365 [2024-07-15 14:29:02.189841] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.623 [2024-07-15 14:29:02.383958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.817  Copying: 512/512 [B] (average 500 kBps) 00:36:17.817 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721053742 )) 00:36:17.817 00:36:17.817 real 0m4.506s 00:36:17.817 user 0m2.804s 00:36:17.817 sys 0m0.456s 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:36:17.817 ************************************ 00:36:17.817 END TEST dd_flag_noatime 00:36:17.817 ************************************ 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:17.817 ************************************ 00:36:17.817 START TEST dd_flags_misc 00:36:17.817 ************************************ 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1123 -- # io 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:17.817 14:29:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:36:18.075 [2024-07-15 14:29:03.830383] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:18.075 [2024-07-15 14:29:03.830826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223497 ] 00:36:18.075 [2024-07-15 14:29:03.994155] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:18.371 [2024-07-15 14:29:04.194797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:19.563  Copying: 512/512 [B] (average 500 kBps) 00:36:19.563 00:36:19.563 14:29:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iz29dt9pjaqllm413im8gcuj7id880dfnwpd7bpwky1zew9zikek7nokcsrbqcs7i2evr5ozklxedzs4vk3kjf56k5g343fp9ziwog5y6a7su60qyqeq22gjxciy52sz8a5j0dq81g31jobd2sncms44hnvdecyjugbv82siqc92fhmr6p7fzkpf8ld1dx3eynub11pzd0r2ghy3tw6ee63gv9kwhxnxpogmssq3swl6wfd3vu072tz4dew5pij0f7f8k6bnvi9fti20ftak33fkp07l1an0idant31o13t685zqigtlnqinjq8e5a9f6sm3zn1xfgk1oukubxenced4aa57a6f4cd75niltwrbrc9jgvhccnu9rmounjvq1ep356g0w7hz1orri57ckt506pk2spdq9x32ix78necaihxpref6cde3cihg6ezejvpep3llyeq88ubovqw4q1vxfrbtiqfnoc6zmgpwjnq0ygzab65nnfed5ot3mp8ox == \i\z\2\9\d\t\9\p\j\a\q\l\l\m\4\1\3\i\m\8\g\c\u\j\7\i\d\8\8\0\d\f\n\w\p\d\7\b\p\w\k\y\1\z\e\w\9\z\i\k\e\k\7\n\o\k\c\s\r\b\q\c\s\7\i\2\e\v\r\5\o\z\k\l\x\e\d\z\s\4\v\k\3\k\j\f\5\6\k\5\g\3\4\3\f\p\9\z\i\w\o\g\5\y\6\a\7\s\u\6\0\q\y\q\e\q\2\2\g\j\x\c\i\y\5\2\s\z\8\a\5\j\0\d\q\8\1\g\3\1\j\o\b\d\2\s\n\c\m\s\4\4\h\n\v\d\e\c\y\j\u\g\b\v\8\2\s\i\q\c\9\2\f\h\m\r\6\p\7\f\z\k\p\f\8\l\d\1\d\x\3\e\y\n\u\b\1\1\p\z\d\0\r\2\g\h\y\3\t\w\6\e\e\6\3\g\v\9\k\w\h\x\n\x\p\o\g\m\s\s\q\3\s\w\l\6\w\f\d\3\v\u\0\7\2\t\z\4\d\e\w\5\p\i\j\0\f\7\f\8\k\6\b\n\v\i\9\f\t\i\2\0\f\t\a\k\3\3\f\k\p\0\7\l\1\a\n\0\i\d\a\n\t\3\1\o\1\3\t\6\8\5\z\q\i\g\t\l\n\q\i\n\j\q\8\e\5\a\9\f\6\s\m\3\z\n\1\x\f\g\k\1\o\u\k\u\b\x\e\n\c\e\d\4\a\a\5\7\a\6\f\4\c\d\7\5\n\i\l\t\w\r\b\r\c\9\j\g\v\h\c\c\n\u\9\r\m\o\u\n\j\v\q\1\e\p\3\5\6\g\0\w\7\h\z\1\o\r\r\i\5\7\c\k\t\5\0\6\p\k\2\s\p\d\q\9\x\3\2\i\x\7\8\n\e\c\a\i\h\x\p\r\e\f\6\c\d\e\3\c\i\h\g\6\e\z\e\j\v\p\e\p\3\l\l\y\e\q\8\8\u\b\o\v\q\w\4\q\1\v\x\f\r\b\t\i\q\f\n\o\c\6\z\m\g\p\w\j\n\q\0\y\g\z\a\b\6\5\n\n\f\e\d\5\o\t\3\m\p\8\o\x ]] 00:36:19.563 14:29:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:19.563 14:29:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:36:19.563 [2024-07-15 14:29:05.542520] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:19.563 [2024-07-15 14:29:05.542847] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223525 ] 00:36:19.822 [2024-07-15 14:29:05.693425] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.080 [2024-07-15 14:29:05.903324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.272  Copying: 512/512 [B] (average 500 kBps) 00:36:21.272 00:36:21.272 14:29:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iz29dt9pjaqllm413im8gcuj7id880dfnwpd7bpwky1zew9zikek7nokcsrbqcs7i2evr5ozklxedzs4vk3kjf56k5g343fp9ziwog5y6a7su60qyqeq22gjxciy52sz8a5j0dq81g31jobd2sncms44hnvdecyjugbv82siqc92fhmr6p7fzkpf8ld1dx3eynub11pzd0r2ghy3tw6ee63gv9kwhxnxpogmssq3swl6wfd3vu072tz4dew5pij0f7f8k6bnvi9fti20ftak33fkp07l1an0idant31o13t685zqigtlnqinjq8e5a9f6sm3zn1xfgk1oukubxenced4aa57a6f4cd75niltwrbrc9jgvhccnu9rmounjvq1ep356g0w7hz1orri57ckt506pk2spdq9x32ix78necaihxpref6cde3cihg6ezejvpep3llyeq88ubovqw4q1vxfrbtiqfnoc6zmgpwjnq0ygzab65nnfed5ot3mp8ox == \i\z\2\9\d\t\9\p\j\a\q\l\l\m\4\1\3\i\m\8\g\c\u\j\7\i\d\8\8\0\d\f\n\w\p\d\7\b\p\w\k\y\1\z\e\w\9\z\i\k\e\k\7\n\o\k\c\s\r\b\q\c\s\7\i\2\e\v\r\5\o\z\k\l\x\e\d\z\s\4\v\k\3\k\j\f\5\6\k\5\g\3\4\3\f\p\9\z\i\w\o\g\5\y\6\a\7\s\u\6\0\q\y\q\e\q\2\2\g\j\x\c\i\y\5\2\s\z\8\a\5\j\0\d\q\8\1\g\3\1\j\o\b\d\2\s\n\c\m\s\4\4\h\n\v\d\e\c\y\j\u\g\b\v\8\2\s\i\q\c\9\2\f\h\m\r\6\p\7\f\z\k\p\f\8\l\d\1\d\x\3\e\y\n\u\b\1\1\p\z\d\0\r\2\g\h\y\3\t\w\6\e\e\6\3\g\v\9\k\w\h\x\n\x\p\o\g\m\s\s\q\3\s\w\l\6\w\f\d\3\v\u\0\7\2\t\z\4\d\e\w\5\p\i\j\0\f\7\f\8\k\6\b\n\v\i\9\f\t\i\2\0\f\t\a\k\3\3\f\k\p\0\7\l\1\a\n\0\i\d\a\n\t\3\1\o\1\3\t\6\8\5\z\q\i\g\t\l\n\q\i\n\j\q\8\e\5\a\9\f\6\s\m\3\z\n\1\x\f\g\k\1\o\u\k\u\b\x\e\n\c\e\d\4\a\a\5\7\a\6\f\4\c\d\7\5\n\i\l\t\w\r\b\r\c\9\j\g\v\h\c\c\n\u\9\r\m\o\u\n\j\v\q\1\e\p\3\5\6\g\0\w\7\h\z\1\o\r\r\i\5\7\c\k\t\5\0\6\p\k\2\s\p\d\q\9\x\3\2\i\x\7\8\n\e\c\a\i\h\x\p\r\e\f\6\c\d\e\3\c\i\h\g\6\e\z\e\j\v\p\e\p\3\l\l\y\e\q\8\8\u\b\o\v\q\w\4\q\1\v\x\f\r\b\t\i\q\f\n\o\c\6\z\m\g\p\w\j\n\q\0\y\g\z\a\b\6\5\n\n\f\e\d\5\o\t\3\m\p\8\o\x ]] 00:36:21.272 14:29:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:21.273 14:29:07 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:36:21.273 [2024-07-15 14:29:07.267906] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:21.273 [2024-07-15 14:29:07.268331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223546 ] 00:36:21.531 [2024-07-15 14:29:07.432967] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.790 [2024-07-15 14:29:07.622385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.017  Copying: 512/512 [B] (average 166 kBps) 00:36:23.017 00:36:23.017 14:29:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iz29dt9pjaqllm413im8gcuj7id880dfnwpd7bpwky1zew9zikek7nokcsrbqcs7i2evr5ozklxedzs4vk3kjf56k5g343fp9ziwog5y6a7su60qyqeq22gjxciy52sz8a5j0dq81g31jobd2sncms44hnvdecyjugbv82siqc92fhmr6p7fzkpf8ld1dx3eynub11pzd0r2ghy3tw6ee63gv9kwhxnxpogmssq3swl6wfd3vu072tz4dew5pij0f7f8k6bnvi9fti20ftak33fkp07l1an0idant31o13t685zqigtlnqinjq8e5a9f6sm3zn1xfgk1oukubxenced4aa57a6f4cd75niltwrbrc9jgvhccnu9rmounjvq1ep356g0w7hz1orri57ckt506pk2spdq9x32ix78necaihxpref6cde3cihg6ezejvpep3llyeq88ubovqw4q1vxfrbtiqfnoc6zmgpwjnq0ygzab65nnfed5ot3mp8ox == \i\z\2\9\d\t\9\p\j\a\q\l\l\m\4\1\3\i\m\8\g\c\u\j\7\i\d\8\8\0\d\f\n\w\p\d\7\b\p\w\k\y\1\z\e\w\9\z\i\k\e\k\7\n\o\k\c\s\r\b\q\c\s\7\i\2\e\v\r\5\o\z\k\l\x\e\d\z\s\4\v\k\3\k\j\f\5\6\k\5\g\3\4\3\f\p\9\z\i\w\o\g\5\y\6\a\7\s\u\6\0\q\y\q\e\q\2\2\g\j\x\c\i\y\5\2\s\z\8\a\5\j\0\d\q\8\1\g\3\1\j\o\b\d\2\s\n\c\m\s\4\4\h\n\v\d\e\c\y\j\u\g\b\v\8\2\s\i\q\c\9\2\f\h\m\r\6\p\7\f\z\k\p\f\8\l\d\1\d\x\3\e\y\n\u\b\1\1\p\z\d\0\r\2\g\h\y\3\t\w\6\e\e\6\3\g\v\9\k\w\h\x\n\x\p\o\g\m\s\s\q\3\s\w\l\6\w\f\d\3\v\u\0\7\2\t\z\4\d\e\w\5\p\i\j\0\f\7\f\8\k\6\b\n\v\i\9\f\t\i\2\0\f\t\a\k\3\3\f\k\p\0\7\l\1\a\n\0\i\d\a\n\t\3\1\o\1\3\t\6\8\5\z\q\i\g\t\l\n\q\i\n\j\q\8\e\5\a\9\f\6\s\m\3\z\n\1\x\f\g\k\1\o\u\k\u\b\x\e\n\c\e\d\4\a\a\5\7\a\6\f\4\c\d\7\5\n\i\l\t\w\r\b\r\c\9\j\g\v\h\c\c\n\u\9\r\m\o\u\n\j\v\q\1\e\p\3\5\6\g\0\w\7\h\z\1\o\r\r\i\5\7\c\k\t\5\0\6\p\k\2\s\p\d\q\9\x\3\2\i\x\7\8\n\e\c\a\i\h\x\p\r\e\f\6\c\d\e\3\c\i\h\g\6\e\z\e\j\v\p\e\p\3\l\l\y\e\q\8\8\u\b\o\v\q\w\4\q\1\v\x\f\r\b\t\i\q\f\n\o\c\6\z\m\g\p\w\j\n\q\0\y\g\z\a\b\6\5\n\n\f\e\d\5\o\t\3\m\p\8\o\x ]] 00:36:23.017 14:29:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:23.017 14:29:08 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:36:23.017 [2024-07-15 14:29:09.013961] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:23.017 [2024-07-15 14:29:09.014351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223571 ] 00:36:23.281 [2024-07-15 14:29:09.165041] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.540 [2024-07-15 14:29:09.368824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.808  Copying: 512/512 [B] (average 250 kBps) 00:36:24.808 00:36:24.808 14:29:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ iz29dt9pjaqllm413im8gcuj7id880dfnwpd7bpwky1zew9zikek7nokcsrbqcs7i2evr5ozklxedzs4vk3kjf56k5g343fp9ziwog5y6a7su60qyqeq22gjxciy52sz8a5j0dq81g31jobd2sncms44hnvdecyjugbv82siqc92fhmr6p7fzkpf8ld1dx3eynub11pzd0r2ghy3tw6ee63gv9kwhxnxpogmssq3swl6wfd3vu072tz4dew5pij0f7f8k6bnvi9fti20ftak33fkp07l1an0idant31o13t685zqigtlnqinjq8e5a9f6sm3zn1xfgk1oukubxenced4aa57a6f4cd75niltwrbrc9jgvhccnu9rmounjvq1ep356g0w7hz1orri57ckt506pk2spdq9x32ix78necaihxpref6cde3cihg6ezejvpep3llyeq88ubovqw4q1vxfrbtiqfnoc6zmgpwjnq0ygzab65nnfed5ot3mp8ox == \i\z\2\9\d\t\9\p\j\a\q\l\l\m\4\1\3\i\m\8\g\c\u\j\7\i\d\8\8\0\d\f\n\w\p\d\7\b\p\w\k\y\1\z\e\w\9\z\i\k\e\k\7\n\o\k\c\s\r\b\q\c\s\7\i\2\e\v\r\5\o\z\k\l\x\e\d\z\s\4\v\k\3\k\j\f\5\6\k\5\g\3\4\3\f\p\9\z\i\w\o\g\5\y\6\a\7\s\u\6\0\q\y\q\e\q\2\2\g\j\x\c\i\y\5\2\s\z\8\a\5\j\0\d\q\8\1\g\3\1\j\o\b\d\2\s\n\c\m\s\4\4\h\n\v\d\e\c\y\j\u\g\b\v\8\2\s\i\q\c\9\2\f\h\m\r\6\p\7\f\z\k\p\f\8\l\d\1\d\x\3\e\y\n\u\b\1\1\p\z\d\0\r\2\g\h\y\3\t\w\6\e\e\6\3\g\v\9\k\w\h\x\n\x\p\o\g\m\s\s\q\3\s\w\l\6\w\f\d\3\v\u\0\7\2\t\z\4\d\e\w\5\p\i\j\0\f\7\f\8\k\6\b\n\v\i\9\f\t\i\2\0\f\t\a\k\3\3\f\k\p\0\7\l\1\a\n\0\i\d\a\n\t\3\1\o\1\3\t\6\8\5\z\q\i\g\t\l\n\q\i\n\j\q\8\e\5\a\9\f\6\s\m\3\z\n\1\x\f\g\k\1\o\u\k\u\b\x\e\n\c\e\d\4\a\a\5\7\a\6\f\4\c\d\7\5\n\i\l\t\w\r\b\r\c\9\j\g\v\h\c\c\n\u\9\r\m\o\u\n\j\v\q\1\e\p\3\5\6\g\0\w\7\h\z\1\o\r\r\i\5\7\c\k\t\5\0\6\p\k\2\s\p\d\q\9\x\3\2\i\x\7\8\n\e\c\a\i\h\x\p\r\e\f\6\c\d\e\3\c\i\h\g\6\e\z\e\j\v\p\e\p\3\l\l\y\e\q\8\8\u\b\o\v\q\w\4\q\1\v\x\f\r\b\t\i\q\f\n\o\c\6\z\m\g\p\w\j\n\q\0\y\g\z\a\b\6\5\n\n\f\e\d\5\o\t\3\m\p\8\o\x ]] 00:36:24.808 14:29:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:36:24.808 14:29:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:36:24.808 14:29:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:36:24.808 14:29:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:36:24.808 14:29:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:24.808 14:29:10 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:36:24.808 [2024-07-15 14:29:10.795802] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:24.808 [2024-07-15 14:29:10.796215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223595 ] 00:36:25.066 [2024-07-15 14:29:10.956054] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:25.327 [2024-07-15 14:29:11.157142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.523  Copying: 512/512 [B] (average 500 kBps) 00:36:26.523 00:36:26.523 14:29:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fyx2un2zvtkivjcn60zw99qdkl912xola32o38t2q0p3c63z2hfv0yha4p2k07y7qc1m3lrkcm8sxlyj6l1xi4dv06ha7mwfhzt3kwo4va4t2ja7ejba2abk2qecjz03b2talb6xbw5z7zvpzib0du6jl73aoa3ekywmkjp5hx924y7ds3rb7exxrcbc4d03nfts7r5690vb3lt8q12wopz5zad5q256p7qgabafj0ucaegz64ndzeceoyclt3zog843vg25dkaf42lahmz8rr0zw3tw0mfryg2hjf1wpz09wfg2zh0gdm9ebj12c9cu2l6uz8kut241ck7nmmw66ogcr7qtuq9g9fivcfa8drbcaks3zqt90bx93gpxllh1rfhd5arc73qw3pnrs220v3d3dfax5f1ecc1jsonsg8bn3x5wq12oq5ow40f9e041kbk7lvassnjb05pik8q63vkm22hk15xffbqoqh9766sl980ryor4305o1p8n9c06 == \f\y\x\2\u\n\2\z\v\t\k\i\v\j\c\n\6\0\z\w\9\9\q\d\k\l\9\1\2\x\o\l\a\3\2\o\3\8\t\2\q\0\p\3\c\6\3\z\2\h\f\v\0\y\h\a\4\p\2\k\0\7\y\7\q\c\1\m\3\l\r\k\c\m\8\s\x\l\y\j\6\l\1\x\i\4\d\v\0\6\h\a\7\m\w\f\h\z\t\3\k\w\o\4\v\a\4\t\2\j\a\7\e\j\b\a\2\a\b\k\2\q\e\c\j\z\0\3\b\2\t\a\l\b\6\x\b\w\5\z\7\z\v\p\z\i\b\0\d\u\6\j\l\7\3\a\o\a\3\e\k\y\w\m\k\j\p\5\h\x\9\2\4\y\7\d\s\3\r\b\7\e\x\x\r\c\b\c\4\d\0\3\n\f\t\s\7\r\5\6\9\0\v\b\3\l\t\8\q\1\2\w\o\p\z\5\z\a\d\5\q\2\5\6\p\7\q\g\a\b\a\f\j\0\u\c\a\e\g\z\6\4\n\d\z\e\c\e\o\y\c\l\t\3\z\o\g\8\4\3\v\g\2\5\d\k\a\f\4\2\l\a\h\m\z\8\r\r\0\z\w\3\t\w\0\m\f\r\y\g\2\h\j\f\1\w\p\z\0\9\w\f\g\2\z\h\0\g\d\m\9\e\b\j\1\2\c\9\c\u\2\l\6\u\z\8\k\u\t\2\4\1\c\k\7\n\m\m\w\6\6\o\g\c\r\7\q\t\u\q\9\g\9\f\i\v\c\f\a\8\d\r\b\c\a\k\s\3\z\q\t\9\0\b\x\9\3\g\p\x\l\l\h\1\r\f\h\d\5\a\r\c\7\3\q\w\3\p\n\r\s\2\2\0\v\3\d\3\d\f\a\x\5\f\1\e\c\c\1\j\s\o\n\s\g\8\b\n\3\x\5\w\q\1\2\o\q\5\o\w\4\0\f\9\e\0\4\1\k\b\k\7\l\v\a\s\s\n\j\b\0\5\p\i\k\8\q\6\3\v\k\m\2\2\h\k\1\5\x\f\f\b\q\o\q\h\9\7\6\6\s\l\9\8\0\r\y\o\r\4\3\0\5\o\1\p\8\n\9\c\0\6 ]] 00:36:26.523 14:29:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:26.523 14:29:12 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:36:26.782 [2024-07-15 14:29:12.543521] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:26.782 [2024-07-15 14:29:12.544062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223612 ] 00:36:26.782 [2024-07-15 14:29:12.692204] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:27.040 [2024-07-15 14:29:12.933322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:28.676  Copying: 512/512 [B] (average 500 kBps) 00:36:28.676 00:36:28.676 14:29:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fyx2un2zvtkivjcn60zw99qdkl912xola32o38t2q0p3c63z2hfv0yha4p2k07y7qc1m3lrkcm8sxlyj6l1xi4dv06ha7mwfhzt3kwo4va4t2ja7ejba2abk2qecjz03b2talb6xbw5z7zvpzib0du6jl73aoa3ekywmkjp5hx924y7ds3rb7exxrcbc4d03nfts7r5690vb3lt8q12wopz5zad5q256p7qgabafj0ucaegz64ndzeceoyclt3zog843vg25dkaf42lahmz8rr0zw3tw0mfryg2hjf1wpz09wfg2zh0gdm9ebj12c9cu2l6uz8kut241ck7nmmw66ogcr7qtuq9g9fivcfa8drbcaks3zqt90bx93gpxllh1rfhd5arc73qw3pnrs220v3d3dfax5f1ecc1jsonsg8bn3x5wq12oq5ow40f9e041kbk7lvassnjb05pik8q63vkm22hk15xffbqoqh9766sl980ryor4305o1p8n9c06 == \f\y\x\2\u\n\2\z\v\t\k\i\v\j\c\n\6\0\z\w\9\9\q\d\k\l\9\1\2\x\o\l\a\3\2\o\3\8\t\2\q\0\p\3\c\6\3\z\2\h\f\v\0\y\h\a\4\p\2\k\0\7\y\7\q\c\1\m\3\l\r\k\c\m\8\s\x\l\y\j\6\l\1\x\i\4\d\v\0\6\h\a\7\m\w\f\h\z\t\3\k\w\o\4\v\a\4\t\2\j\a\7\e\j\b\a\2\a\b\k\2\q\e\c\j\z\0\3\b\2\t\a\l\b\6\x\b\w\5\z\7\z\v\p\z\i\b\0\d\u\6\j\l\7\3\a\o\a\3\e\k\y\w\m\k\j\p\5\h\x\9\2\4\y\7\d\s\3\r\b\7\e\x\x\r\c\b\c\4\d\0\3\n\f\t\s\7\r\5\6\9\0\v\b\3\l\t\8\q\1\2\w\o\p\z\5\z\a\d\5\q\2\5\6\p\7\q\g\a\b\a\f\j\0\u\c\a\e\g\z\6\4\n\d\z\e\c\e\o\y\c\l\t\3\z\o\g\8\4\3\v\g\2\5\d\k\a\f\4\2\l\a\h\m\z\8\r\r\0\z\w\3\t\w\0\m\f\r\y\g\2\h\j\f\1\w\p\z\0\9\w\f\g\2\z\h\0\g\d\m\9\e\b\j\1\2\c\9\c\u\2\l\6\u\z\8\k\u\t\2\4\1\c\k\7\n\m\m\w\6\6\o\g\c\r\7\q\t\u\q\9\g\9\f\i\v\c\f\a\8\d\r\b\c\a\k\s\3\z\q\t\9\0\b\x\9\3\g\p\x\l\l\h\1\r\f\h\d\5\a\r\c\7\3\q\w\3\p\n\r\s\2\2\0\v\3\d\3\d\f\a\x\5\f\1\e\c\c\1\j\s\o\n\s\g\8\b\n\3\x\5\w\q\1\2\o\q\5\o\w\4\0\f\9\e\0\4\1\k\b\k\7\l\v\a\s\s\n\j\b\0\5\p\i\k\8\q\6\3\v\k\m\2\2\h\k\1\5\x\f\f\b\q\o\q\h\9\7\6\6\s\l\9\8\0\r\y\o\r\4\3\0\5\o\1\p\8\n\9\c\0\6 ]] 00:36:28.676 14:29:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:28.676 14:29:14 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:36:28.676 [2024-07-15 14:29:14.337002] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:28.676 [2024-07-15 14:29:14.337377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223640 ] 00:36:28.676 [2024-07-15 14:29:14.498496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:28.935 [2024-07-15 14:29:14.700386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.131  Copying: 512/512 [B] (average 166 kBps) 00:36:30.131 00:36:30.131 14:29:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fyx2un2zvtkivjcn60zw99qdkl912xola32o38t2q0p3c63z2hfv0yha4p2k07y7qc1m3lrkcm8sxlyj6l1xi4dv06ha7mwfhzt3kwo4va4t2ja7ejba2abk2qecjz03b2talb6xbw5z7zvpzib0du6jl73aoa3ekywmkjp5hx924y7ds3rb7exxrcbc4d03nfts7r5690vb3lt8q12wopz5zad5q256p7qgabafj0ucaegz64ndzeceoyclt3zog843vg25dkaf42lahmz8rr0zw3tw0mfryg2hjf1wpz09wfg2zh0gdm9ebj12c9cu2l6uz8kut241ck7nmmw66ogcr7qtuq9g9fivcfa8drbcaks3zqt90bx93gpxllh1rfhd5arc73qw3pnrs220v3d3dfax5f1ecc1jsonsg8bn3x5wq12oq5ow40f9e041kbk7lvassnjb05pik8q63vkm22hk15xffbqoqh9766sl980ryor4305o1p8n9c06 == \f\y\x\2\u\n\2\z\v\t\k\i\v\j\c\n\6\0\z\w\9\9\q\d\k\l\9\1\2\x\o\l\a\3\2\o\3\8\t\2\q\0\p\3\c\6\3\z\2\h\f\v\0\y\h\a\4\p\2\k\0\7\y\7\q\c\1\m\3\l\r\k\c\m\8\s\x\l\y\j\6\l\1\x\i\4\d\v\0\6\h\a\7\m\w\f\h\z\t\3\k\w\o\4\v\a\4\t\2\j\a\7\e\j\b\a\2\a\b\k\2\q\e\c\j\z\0\3\b\2\t\a\l\b\6\x\b\w\5\z\7\z\v\p\z\i\b\0\d\u\6\j\l\7\3\a\o\a\3\e\k\y\w\m\k\j\p\5\h\x\9\2\4\y\7\d\s\3\r\b\7\e\x\x\r\c\b\c\4\d\0\3\n\f\t\s\7\r\5\6\9\0\v\b\3\l\t\8\q\1\2\w\o\p\z\5\z\a\d\5\q\2\5\6\p\7\q\g\a\b\a\f\j\0\u\c\a\e\g\z\6\4\n\d\z\e\c\e\o\y\c\l\t\3\z\o\g\8\4\3\v\g\2\5\d\k\a\f\4\2\l\a\h\m\z\8\r\r\0\z\w\3\t\w\0\m\f\r\y\g\2\h\j\f\1\w\p\z\0\9\w\f\g\2\z\h\0\g\d\m\9\e\b\j\1\2\c\9\c\u\2\l\6\u\z\8\k\u\t\2\4\1\c\k\7\n\m\m\w\6\6\o\g\c\r\7\q\t\u\q\9\g\9\f\i\v\c\f\a\8\d\r\b\c\a\k\s\3\z\q\t\9\0\b\x\9\3\g\p\x\l\l\h\1\r\f\h\d\5\a\r\c\7\3\q\w\3\p\n\r\s\2\2\0\v\3\d\3\d\f\a\x\5\f\1\e\c\c\1\j\s\o\n\s\g\8\b\n\3\x\5\w\q\1\2\o\q\5\o\w\4\0\f\9\e\0\4\1\k\b\k\7\l\v\a\s\s\n\j\b\0\5\p\i\k\8\q\6\3\v\k\m\2\2\h\k\1\5\x\f\f\b\q\o\q\h\9\7\6\6\s\l\9\8\0\r\y\o\r\4\3\0\5\o\1\p\8\n\9\c\0\6 ]] 00:36:30.131 14:29:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:30.131 14:29:16 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:36:30.131 [2024-07-15 14:29:16.079622] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:30.131 [2024-07-15 14:29:16.080065] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223665 ] 00:36:30.389 [2024-07-15 14:29:16.240492] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:30.647 [2024-07-15 14:29:16.427270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:31.842  Copying: 512/512 [B] (average 250 kBps) 00:36:31.842 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ fyx2un2zvtkivjcn60zw99qdkl912xola32o38t2q0p3c63z2hfv0yha4p2k07y7qc1m3lrkcm8sxlyj6l1xi4dv06ha7mwfhzt3kwo4va4t2ja7ejba2abk2qecjz03b2talb6xbw5z7zvpzib0du6jl73aoa3ekywmkjp5hx924y7ds3rb7exxrcbc4d03nfts7r5690vb3lt8q12wopz5zad5q256p7qgabafj0ucaegz64ndzeceoyclt3zog843vg25dkaf42lahmz8rr0zw3tw0mfryg2hjf1wpz09wfg2zh0gdm9ebj12c9cu2l6uz8kut241ck7nmmw66ogcr7qtuq9g9fivcfa8drbcaks3zqt90bx93gpxllh1rfhd5arc73qw3pnrs220v3d3dfax5f1ecc1jsonsg8bn3x5wq12oq5ow40f9e041kbk7lvassnjb05pik8q63vkm22hk15xffbqoqh9766sl980ryor4305o1p8n9c06 == \f\y\x\2\u\n\2\z\v\t\k\i\v\j\c\n\6\0\z\w\9\9\q\d\k\l\9\1\2\x\o\l\a\3\2\o\3\8\t\2\q\0\p\3\c\6\3\z\2\h\f\v\0\y\h\a\4\p\2\k\0\7\y\7\q\c\1\m\3\l\r\k\c\m\8\s\x\l\y\j\6\l\1\x\i\4\d\v\0\6\h\a\7\m\w\f\h\z\t\3\k\w\o\4\v\a\4\t\2\j\a\7\e\j\b\a\2\a\b\k\2\q\e\c\j\z\0\3\b\2\t\a\l\b\6\x\b\w\5\z\7\z\v\p\z\i\b\0\d\u\6\j\l\7\3\a\o\a\3\e\k\y\w\m\k\j\p\5\h\x\9\2\4\y\7\d\s\3\r\b\7\e\x\x\r\c\b\c\4\d\0\3\n\f\t\s\7\r\5\6\9\0\v\b\3\l\t\8\q\1\2\w\o\p\z\5\z\a\d\5\q\2\5\6\p\7\q\g\a\b\a\f\j\0\u\c\a\e\g\z\6\4\n\d\z\e\c\e\o\y\c\l\t\3\z\o\g\8\4\3\v\g\2\5\d\k\a\f\4\2\l\a\h\m\z\8\r\r\0\z\w\3\t\w\0\m\f\r\y\g\2\h\j\f\1\w\p\z\0\9\w\f\g\2\z\h\0\g\d\m\9\e\b\j\1\2\c\9\c\u\2\l\6\u\z\8\k\u\t\2\4\1\c\k\7\n\m\m\w\6\6\o\g\c\r\7\q\t\u\q\9\g\9\f\i\v\c\f\a\8\d\r\b\c\a\k\s\3\z\q\t\9\0\b\x\9\3\g\p\x\l\l\h\1\r\f\h\d\5\a\r\c\7\3\q\w\3\p\n\r\s\2\2\0\v\3\d\3\d\f\a\x\5\f\1\e\c\c\1\j\s\o\n\s\g\8\b\n\3\x\5\w\q\1\2\o\q\5\o\w\4\0\f\9\e\0\4\1\k\b\k\7\l\v\a\s\s\n\j\b\0\5\p\i\k\8\q\6\3\v\k\m\2\2\h\k\1\5\x\f\f\b\q\o\q\h\9\7\6\6\s\l\9\8\0\r\y\o\r\4\3\0\5\o\1\p\8\n\9\c\0\6 ]] 00:36:31.842 00:36:31.842 real 0m14.016s 00:36:31.842 user 0m11.298s 00:36:31.842 sys 0m1.725s 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:36:31.842 ************************************ 00:36:31.842 END TEST dd_flags_misc 00:36:31.842 ************************************ 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:36:31.842 * Second test run, using AIO 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:31.842 14:29:17 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:32.101 ************************************ 00:36:32.101 START TEST 
dd_flag_append_forced_aio 00:36:32.101 ************************************ 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1123 -- # append 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=t7iqlbrwirdd1oxoeowjb4o033o1dp32 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=cpb7zkpq7khdtkw4tgwvfrr934hf8qn0 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s t7iqlbrwirdd1oxoeowjb4o033o1dp32 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s cpb7zkpq7khdtkw4tgwvfrr934hf8qn0 00:36:32.101 14:29:17 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:36:32.101 [2024-07-15 14:29:17.907981] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:32.101 [2024-07-15 14:29:17.908165] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223710 ] 00:36:32.101 [2024-07-15 14:29:18.070274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:32.360 [2024-07-15 14:29:18.260621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.996  Copying: 32/32 [B] (average 31 kBps) 00:36:33.996 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ cpb7zkpq7khdtkw4tgwvfrr934hf8qn0t7iqlbrwirdd1oxoeowjb4o033o1dp32 == \c\p\b\7\z\k\p\q\7\k\h\d\t\k\w\4\t\g\w\v\f\r\r\9\3\4\h\f\8\q\n\0\t\7\i\q\l\b\r\w\i\r\d\d\1\o\x\o\e\o\w\j\b\4\o\0\3\3\o\1\d\p\3\2 ]] 00:36:33.996 00:36:33.996 real 0m1.745s 00:36:33.996 user 0m1.418s 00:36:33.996 sys 0m0.204s 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:33.996 ************************************ 00:36:33.996 END TEST dd_flag_append_forced_aio 00:36:33.996 ************************************ 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:33.996 ************************************ 00:36:33.996 START TEST dd_flag_directory_forced_aio 00:36:33.996 ************************************ 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1123 -- # directory 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:33.996 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:33.997 14:29:19 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:33.997 [2024-07-15 14:29:19.697560] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:33.997 [2024-07-15 14:29:19.697765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223751 ] 00:36:33.997 [2024-07-15 14:29:19.858550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.255 [2024-07-15 14:29:20.064607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.513 [2024-07-15 14:29:20.343828] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:34.513 [2024-07-15 14:29:20.344125] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:34.513 [2024-07-15 14:29:20.344274] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:35.079 [2024-07-15 14:29:21.002319] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:35.709 14:29:21 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:36:35.709 [2024-07-15 14:29:21.423581] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:35.709 [2024-07-15 14:29:21.424208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223783 ] 00:36:35.709 [2024-07-15 14:29:21.571944] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:35.968 [2024-07-15 14:29:21.766022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:36.226 [2024-07-15 14:29:22.045113] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:36.226 [2024-07-15 14:29:22.045492] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:36:36.226 [2024-07-15 14:29:22.045661] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:36.795 [2024-07-15 14:29:22.727196] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:37.362 00:36:37.362 real 0m3.436s 00:36:37.362 user 0m2.808s 00:36:37.362 sys 0m0.407s 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:37.362 ************************************ 00:36:37.362 END TEST 
dd_flag_directory_forced_aio 00:36:37.362 ************************************ 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:37.362 ************************************ 00:36:37.362 START TEST dd_flag_nofollow_forced_aio 00:36:37.362 ************************************ 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1123 -- # nofollow 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:37.362 14:29:23 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:37.362 [2024-07-15 14:29:23.197793] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:37.362 [2024-07-15 14:29:23.198020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223829 ] 00:36:37.362 [2024-07-15 14:29:23.356392] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:37.621 [2024-07-15 14:29:23.545598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:37.879 [2024-07-15 14:29:23.833279] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:36:37.879 [2024-07-15 14:29:23.833599] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:36:37.879 [2024-07-15 14:29:23.833716] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:38.813 [2024-07-15 14:29:24.523129] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:36:39.072 14:29:24 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:36:39.072 [2024-07-15 14:29:24.919698] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:39.072 [2024-07-15 14:29:24.919901] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223850 ] 00:36:39.072 [2024-07-15 14:29:25.070588] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:39.331 [2024-07-15 14:29:25.263888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.590 [2024-07-15 14:29:25.547378] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:36:39.590 [2024-07-15 14:29:25.547660] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:36:39.590 [2024-07-15 14:29:25.547839] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:40.525 [2024-07-15 14:29:26.215731] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:40.783 14:29:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:40.783 [2024-07-15 14:29:26.627926] Starting SPDK 
v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:40.783 [2024-07-15 14:29:26.628142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223872 ] 00:36:41.041 [2024-07-15 14:29:26.788483] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:41.041 [2024-07-15 14:29:26.995407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:42.357  Copying: 512/512 [B] (average 500 kBps) 00:36:42.357 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ jr7ee8mr51w6cjknc0hi8niz1bglhlfnbssfjrzy7c8dd2r4tt66dtjp86v2gf4lus6t99an6vajuxfgysp47g3lzsl49vbm1lq61uybq4fyp2zjceuo5gsb0i0cubgquz2mypxp2h1r4zj5xuukr8ksxuksb0t264wxk42tehsvfoti89p1du59cojrv99svni7899rxzbcdlmtll3zxoqk6lsthbgnpoa4etg4r8znzw3dxfuapr3zls5i28ecztpxao27ajpc2zkpjunli7ru3k4krbmu67pmwcaewdjml4gezo45peiawcqcqisj9ifrhnjcoi58n7fnbgwopptwcdirucxmxfvap2sxccwkwggml0wqwiw8ywxzuogrb3c9djcyag1u4vvd19s1666gezf6so8f0xyqm321vlkl7g0z6a6az5lkz4sc1305x8ppuli8qllv6h19qkybxa35a22j97g9caa8iecyhk6rudjh4zw3w1bd01ldccn2 == \j\r\7\e\e\8\m\r\5\1\w\6\c\j\k\n\c\0\h\i\8\n\i\z\1\b\g\l\h\l\f\n\b\s\s\f\j\r\z\y\7\c\8\d\d\2\r\4\t\t\6\6\d\t\j\p\8\6\v\2\g\f\4\l\u\s\6\t\9\9\a\n\6\v\a\j\u\x\f\g\y\s\p\4\7\g\3\l\z\s\l\4\9\v\b\m\1\l\q\6\1\u\y\b\q\4\f\y\p\2\z\j\c\e\u\o\5\g\s\b\0\i\0\c\u\b\g\q\u\z\2\m\y\p\x\p\2\h\1\r\4\z\j\5\x\u\u\k\r\8\k\s\x\u\k\s\b\0\t\2\6\4\w\x\k\4\2\t\e\h\s\v\f\o\t\i\8\9\p\1\d\u\5\9\c\o\j\r\v\9\9\s\v\n\i\7\8\9\9\r\x\z\b\c\d\l\m\t\l\l\3\z\x\o\q\k\6\l\s\t\h\b\g\n\p\o\a\4\e\t\g\4\r\8\z\n\z\w\3\d\x\f\u\a\p\r\3\z\l\s\5\i\2\8\e\c\z\t\p\x\a\o\2\7\a\j\p\c\2\z\k\p\j\u\n\l\i\7\r\u\3\k\4\k\r\b\m\u\6\7\p\m\w\c\a\e\w\d\j\m\l\4\g\e\z\o\4\5\p\e\i\a\w\c\q\c\q\i\s\j\9\i\f\r\h\n\j\c\o\i\5\8\n\7\f\n\b\g\w\o\p\p\t\w\c\d\i\r\u\c\x\m\x\f\v\a\p\2\s\x\c\c\w\k\w\g\g\m\l\0\w\q\w\i\w\8\y\w\x\z\u\o\g\r\b\3\c\9\d\j\c\y\a\g\1\u\4\v\v\d\1\9\s\1\6\6\6\g\e\z\f\6\s\o\8\f\0\x\y\q\m\3\2\1\v\l\k\l\7\g\0\z\6\a\6\a\z\5\l\k\z\4\s\c\1\3\0\5\x\8\p\p\u\l\i\8\q\l\l\v\6\h\1\9\q\k\y\b\x\a\3\5\a\2\2\j\9\7\g\9\c\a\a\8\i\e\c\y\h\k\6\r\u\d\j\h\4\z\w\3\w\1\b\d\0\1\l\d\c\c\n\2 ]] 00:36:42.357 00:36:42.357 real 0m5.158s 00:36:42.357 user 0m4.163s 00:36:42.357 sys 0m0.653s 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:42.357 ************************************ 00:36:42.357 END TEST dd_flag_nofollow_forced_aio 00:36:42.357 ************************************ 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:42.357 14:29:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:42.357 ************************************ 00:36:42.357 START TEST dd_flag_noatime_forced_aio 00:36:42.357 ************************************ 00:36:42.358 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1123 -- # noatime 00:36:42.358 
14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:36:42.358 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:36:42.358 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:36:42.358 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:42.358 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:42.616 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:42.616 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721053767 00:36:42.616 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:42.616 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721053768 00:36:42.616 14:29:28 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:36:43.550 14:29:29 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:43.550 [2024-07-15 14:29:29.414566] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:36:43.550 [2024-07-15 14:29:29.414778] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223936 ] 00:36:43.807 [2024-07-15 14:29:29.565247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:43.807 [2024-07-15 14:29:29.764187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:45.438  Copying: 512/512 [B] (average 500 kBps) 00:36:45.438 00:36:45.438 14:29:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:45.438 14:29:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721053767 )) 00:36:45.438 14:29:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:45.438 14:29:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721053768 )) 00:36:45.438 14:29:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:36:45.438 [2024-07-15 14:29:31.126891] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:45.438 [2024-07-15 14:29:31.127463] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223962 ] 00:36:45.438 [2024-07-15 14:29:31.275592] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.696 [2024-07-15 14:29:31.485634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.890  Copying: 512/512 [B] (average 500 kBps) 00:36:46.890 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721053771 )) 00:36:46.890 00:36:46.890 real 0m4.474s 00:36:46.890 user 0m2.805s 00:36:46.890 sys 0m0.427s 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:46.890 ************************************ 00:36:46.890 END TEST dd_flag_noatime_forced_aio 00:36:46.890 ************************************ 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:36:46.890 ************************************ 00:36:46.890 START TEST dd_flags_misc_forced_aio 00:36:46.890 ************************************ 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1123 -- # io 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:46.890 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:47.148 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:47.148 14:29:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:36:47.148 [2024-07-15 14:29:32.931848] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:47.149 [2024-07-15 14:29:32.932400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224004 ] 00:36:47.149 [2024-07-15 14:29:33.081637] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.413 [2024-07-15 14:29:33.276606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:48.608  Copying: 512/512 [B] (average 500 kBps) 00:36:48.608 00:36:48.866 14:29:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i41enppqak06cvj5w5rflsa8fj7uxb9mniotvjmq6thiy7gxzdq72oa6scrj6hmjeav0y5ydmm7d2zsis43uqx4kzpi56z5jci4gr1rd8g1ixnvze61pi6wf2umatcpu8jhgeqstvw7hnackpotv1ccw9ctq963j4uky98lhxigtg5scvm23tk6u04wxp2jm5suzd2aol5j75mucbjv8jptrjvktkhu4vq3gin347ebsdatsxiopro95mo3rhglwmrny2leuie1xw56wdhlsaxntd6g6cdqywhixawikfu9mn50l0hgcwk718qw3qd81uvtdjzbqzf0bnar0ayd97e4xssvmz8ro755xlh6p622150y5qy3uviwa7vvbuic5m4erg9n6gnrj2ovifjuhtbc7j1677t2w4x4wqlckez5zyzishnhwdn2yautr5g2f1lsexb1t2i5i0ax8hl5nlqzdjufhe1pmux7hf14zydc4c4mt3itp0ukhgcu6453p == \i\4\1\e\n\p\p\q\a\k\0\6\c\v\j\5\w\5\r\f\l\s\a\8\f\j\7\u\x\b\9\m\n\i\o\t\v\j\m\q\6\t\h\i\y\7\g\x\z\d\q\7\2\o\a\6\s\c\r\j\6\h\m\j\e\a\v\0\y\5\y\d\m\m\7\d\2\z\s\i\s\4\3\u\q\x\4\k\z\p\i\5\6\z\5\j\c\i\4\g\r\1\r\d\8\g\1\i\x\n\v\z\e\6\1\p\i\6\w\f\2\u\m\a\t\c\p\u\8\j\h\g\e\q\s\t\v\w\7\h\n\a\c\k\p\o\t\v\1\c\c\w\9\c\t\q\9\6\3\j\4\u\k\y\9\8\l\h\x\i\g\t\g\5\s\c\v\m\2\3\t\k\6\u\0\4\w\x\p\2\j\m\5\s\u\z\d\2\a\o\l\5\j\7\5\m\u\c\b\j\v\8\j\p\t\r\j\v\k\t\k\h\u\4\v\q\3\g\i\n\3\4\7\e\b\s\d\a\t\s\x\i\o\p\r\o\9\5\m\o\3\r\h\g\l\w\m\r\n\y\2\l\e\u\i\e\1\x\w\5\6\w\d\h\l\s\a\x\n\t\d\6\g\6\c\d\q\y\w\h\i\x\a\w\i\k\f\u\9\m\n\5\0\l\0\h\g\c\w\k\7\1\8\q\w\3\q\d\8\1\u\v\t\d\j\z\b\q\z\f\0\b\n\a\r\0\a\y\d\9\7\e\4\x\s\s\v\m\z\8\r\o\7\5\5\x\l\h\6\p\6\2\2\1\5\0\y\5\q\y\3\u\v\i\w\a\7\v\v\b\u\i\c\5\m\4\e\r\g\9\n\6\g\n\r\j\2\o\v\i\f\j\u\h\t\b\c\7\j\1\6\7\7\t\2\w\4\x\4\w\q\l\c\k\e\z\5\z\y\z\i\s\h\n\h\w\d\n\2\y\a\u\t\r\5\g\2\f\1\l\s\e\x\b\1\t\2\i\5\i\0\a\x\8\h\l\5\n\l\q\z\d\j\u\f\h\e\1\p\m\u\x\7\h\f\1\4\z\y\d\c\4\c\4\m\t\3\i\t\p\0\u\k\h\g\c\u\6\4\5\3\p ]] 00:36:48.866 14:29:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:48.867 14:29:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:36:48.867 [2024-07-15 14:29:34.657370] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:48.867 [2024-07-15 14:29:34.658000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224031 ] 00:36:48.867 [2024-07-15 14:29:34.803248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:49.124 [2024-07-15 14:29:35.000999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:50.758  Copying: 512/512 [B] (average 500 kBps) 00:36:50.758 00:36:50.758 14:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i41enppqak06cvj5w5rflsa8fj7uxb9mniotvjmq6thiy7gxzdq72oa6scrj6hmjeav0y5ydmm7d2zsis43uqx4kzpi56z5jci4gr1rd8g1ixnvze61pi6wf2umatcpu8jhgeqstvw7hnackpotv1ccw9ctq963j4uky98lhxigtg5scvm23tk6u04wxp2jm5suzd2aol5j75mucbjv8jptrjvktkhu4vq3gin347ebsdatsxiopro95mo3rhglwmrny2leuie1xw56wdhlsaxntd6g6cdqywhixawikfu9mn50l0hgcwk718qw3qd81uvtdjzbqzf0bnar0ayd97e4xssvmz8ro755xlh6p622150y5qy3uviwa7vvbuic5m4erg9n6gnrj2ovifjuhtbc7j1677t2w4x4wqlckez5zyzishnhwdn2yautr5g2f1lsexb1t2i5i0ax8hl5nlqzdjufhe1pmux7hf14zydc4c4mt3itp0ukhgcu6453p == \i\4\1\e\n\p\p\q\a\k\0\6\c\v\j\5\w\5\r\f\l\s\a\8\f\j\7\u\x\b\9\m\n\i\o\t\v\j\m\q\6\t\h\i\y\7\g\x\z\d\q\7\2\o\a\6\s\c\r\j\6\h\m\j\e\a\v\0\y\5\y\d\m\m\7\d\2\z\s\i\s\4\3\u\q\x\4\k\z\p\i\5\6\z\5\j\c\i\4\g\r\1\r\d\8\g\1\i\x\n\v\z\e\6\1\p\i\6\w\f\2\u\m\a\t\c\p\u\8\j\h\g\e\q\s\t\v\w\7\h\n\a\c\k\p\o\t\v\1\c\c\w\9\c\t\q\9\6\3\j\4\u\k\y\9\8\l\h\x\i\g\t\g\5\s\c\v\m\2\3\t\k\6\u\0\4\w\x\p\2\j\m\5\s\u\z\d\2\a\o\l\5\j\7\5\m\u\c\b\j\v\8\j\p\t\r\j\v\k\t\k\h\u\4\v\q\3\g\i\n\3\4\7\e\b\s\d\a\t\s\x\i\o\p\r\o\9\5\m\o\3\r\h\g\l\w\m\r\n\y\2\l\e\u\i\e\1\x\w\5\6\w\d\h\l\s\a\x\n\t\d\6\g\6\c\d\q\y\w\h\i\x\a\w\i\k\f\u\9\m\n\5\0\l\0\h\g\c\w\k\7\1\8\q\w\3\q\d\8\1\u\v\t\d\j\z\b\q\z\f\0\b\n\a\r\0\a\y\d\9\7\e\4\x\s\s\v\m\z\8\r\o\7\5\5\x\l\h\6\p\6\2\2\1\5\0\y\5\q\y\3\u\v\i\w\a\7\v\v\b\u\i\c\5\m\4\e\r\g\9\n\6\g\n\r\j\2\o\v\i\f\j\u\h\t\b\c\7\j\1\6\7\7\t\2\w\4\x\4\w\q\l\c\k\e\z\5\z\y\z\i\s\h\n\h\w\d\n\2\y\a\u\t\r\5\g\2\f\1\l\s\e\x\b\1\t\2\i\5\i\0\a\x\8\h\l\5\n\l\q\z\d\j\u\f\h\e\1\p\m\u\x\7\h\f\1\4\z\y\d\c\4\c\4\m\t\3\i\t\p\0\u\k\h\g\c\u\6\4\5\3\p ]] 00:36:50.758 14:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:50.758 14:29:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:36:50.758 [2024-07-15 14:29:36.397372] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:50.758 [2024-07-15 14:29:36.397557] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224055 ] 00:36:50.758 [2024-07-15 14:29:36.548260] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.758 [2024-07-15 14:29:36.743467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.268  Copying: 512/512 [B] (average 250 kBps) 00:36:52.268 00:36:52.268 14:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i41enppqak06cvj5w5rflsa8fj7uxb9mniotvjmq6thiy7gxzdq72oa6scrj6hmjeav0y5ydmm7d2zsis43uqx4kzpi56z5jci4gr1rd8g1ixnvze61pi6wf2umatcpu8jhgeqstvw7hnackpotv1ccw9ctq963j4uky98lhxigtg5scvm23tk6u04wxp2jm5suzd2aol5j75mucbjv8jptrjvktkhu4vq3gin347ebsdatsxiopro95mo3rhglwmrny2leuie1xw56wdhlsaxntd6g6cdqywhixawikfu9mn50l0hgcwk718qw3qd81uvtdjzbqzf0bnar0ayd97e4xssvmz8ro755xlh6p622150y5qy3uviwa7vvbuic5m4erg9n6gnrj2ovifjuhtbc7j1677t2w4x4wqlckez5zyzishnhwdn2yautr5g2f1lsexb1t2i5i0ax8hl5nlqzdjufhe1pmux7hf14zydc4c4mt3itp0ukhgcu6453p == \i\4\1\e\n\p\p\q\a\k\0\6\c\v\j\5\w\5\r\f\l\s\a\8\f\j\7\u\x\b\9\m\n\i\o\t\v\j\m\q\6\t\h\i\y\7\g\x\z\d\q\7\2\o\a\6\s\c\r\j\6\h\m\j\e\a\v\0\y\5\y\d\m\m\7\d\2\z\s\i\s\4\3\u\q\x\4\k\z\p\i\5\6\z\5\j\c\i\4\g\r\1\r\d\8\g\1\i\x\n\v\z\e\6\1\p\i\6\w\f\2\u\m\a\t\c\p\u\8\j\h\g\e\q\s\t\v\w\7\h\n\a\c\k\p\o\t\v\1\c\c\w\9\c\t\q\9\6\3\j\4\u\k\y\9\8\l\h\x\i\g\t\g\5\s\c\v\m\2\3\t\k\6\u\0\4\w\x\p\2\j\m\5\s\u\z\d\2\a\o\l\5\j\7\5\m\u\c\b\j\v\8\j\p\t\r\j\v\k\t\k\h\u\4\v\q\3\g\i\n\3\4\7\e\b\s\d\a\t\s\x\i\o\p\r\o\9\5\m\o\3\r\h\g\l\w\m\r\n\y\2\l\e\u\i\e\1\x\w\5\6\w\d\h\l\s\a\x\n\t\d\6\g\6\c\d\q\y\w\h\i\x\a\w\i\k\f\u\9\m\n\5\0\l\0\h\g\c\w\k\7\1\8\q\w\3\q\d\8\1\u\v\t\d\j\z\b\q\z\f\0\b\n\a\r\0\a\y\d\9\7\e\4\x\s\s\v\m\z\8\r\o\7\5\5\x\l\h\6\p\6\2\2\1\5\0\y\5\q\y\3\u\v\i\w\a\7\v\v\b\u\i\c\5\m\4\e\r\g\9\n\6\g\n\r\j\2\o\v\i\f\j\u\h\t\b\c\7\j\1\6\7\7\t\2\w\4\x\4\w\q\l\c\k\e\z\5\z\y\z\i\s\h\n\h\w\d\n\2\y\a\u\t\r\5\g\2\f\1\l\s\e\x\b\1\t\2\i\5\i\0\a\x\8\h\l\5\n\l\q\z\d\j\u\f\h\e\1\p\m\u\x\7\h\f\1\4\z\y\d\c\4\c\4\m\t\3\i\t\p\0\u\k\h\g\c\u\6\4\5\3\p ]] 00:36:52.268 14:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:52.268 14:29:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:36:52.268 [2024-07-15 14:29:38.101540] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:52.268 [2024-07-15 14:29:38.101771] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224072 ] 00:36:52.268 [2024-07-15 14:29:38.265517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.527 [2024-07-15 14:29:38.471847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:54.164  Copying: 512/512 [B] (average 250 kBps) 00:36:54.164 00:36:54.164 14:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ i41enppqak06cvj5w5rflsa8fj7uxb9mniotvjmq6thiy7gxzdq72oa6scrj6hmjeav0y5ydmm7d2zsis43uqx4kzpi56z5jci4gr1rd8g1ixnvze61pi6wf2umatcpu8jhgeqstvw7hnackpotv1ccw9ctq963j4uky98lhxigtg5scvm23tk6u04wxp2jm5suzd2aol5j75mucbjv8jptrjvktkhu4vq3gin347ebsdatsxiopro95mo3rhglwmrny2leuie1xw56wdhlsaxntd6g6cdqywhixawikfu9mn50l0hgcwk718qw3qd81uvtdjzbqzf0bnar0ayd97e4xssvmz8ro755xlh6p622150y5qy3uviwa7vvbuic5m4erg9n6gnrj2ovifjuhtbc7j1677t2w4x4wqlckez5zyzishnhwdn2yautr5g2f1lsexb1t2i5i0ax8hl5nlqzdjufhe1pmux7hf14zydc4c4mt3itp0ukhgcu6453p == \i\4\1\e\n\p\p\q\a\k\0\6\c\v\j\5\w\5\r\f\l\s\a\8\f\j\7\u\x\b\9\m\n\i\o\t\v\j\m\q\6\t\h\i\y\7\g\x\z\d\q\7\2\o\a\6\s\c\r\j\6\h\m\j\e\a\v\0\y\5\y\d\m\m\7\d\2\z\s\i\s\4\3\u\q\x\4\k\z\p\i\5\6\z\5\j\c\i\4\g\r\1\r\d\8\g\1\i\x\n\v\z\e\6\1\p\i\6\w\f\2\u\m\a\t\c\p\u\8\j\h\g\e\q\s\t\v\w\7\h\n\a\c\k\p\o\t\v\1\c\c\w\9\c\t\q\9\6\3\j\4\u\k\y\9\8\l\h\x\i\g\t\g\5\s\c\v\m\2\3\t\k\6\u\0\4\w\x\p\2\j\m\5\s\u\z\d\2\a\o\l\5\j\7\5\m\u\c\b\j\v\8\j\p\t\r\j\v\k\t\k\h\u\4\v\q\3\g\i\n\3\4\7\e\b\s\d\a\t\s\x\i\o\p\r\o\9\5\m\o\3\r\h\g\l\w\m\r\n\y\2\l\e\u\i\e\1\x\w\5\6\w\d\h\l\s\a\x\n\t\d\6\g\6\c\d\q\y\w\h\i\x\a\w\i\k\f\u\9\m\n\5\0\l\0\h\g\c\w\k\7\1\8\q\w\3\q\d\8\1\u\v\t\d\j\z\b\q\z\f\0\b\n\a\r\0\a\y\d\9\7\e\4\x\s\s\v\m\z\8\r\o\7\5\5\x\l\h\6\p\6\2\2\1\5\0\y\5\q\y\3\u\v\i\w\a\7\v\v\b\u\i\c\5\m\4\e\r\g\9\n\6\g\n\r\j\2\o\v\i\f\j\u\h\t\b\c\7\j\1\6\7\7\t\2\w\4\x\4\w\q\l\c\k\e\z\5\z\y\z\i\s\h\n\h\w\d\n\2\y\a\u\t\r\5\g\2\f\1\l\s\e\x\b\1\t\2\i\5\i\0\a\x\8\h\l\5\n\l\q\z\d\j\u\f\h\e\1\p\m\u\x\7\h\f\1\4\z\y\d\c\4\c\4\m\t\3\i\t\p\0\u\k\h\g\c\u\6\4\5\3\p ]] 00:36:54.164 14:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:36:54.164 14:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:36:54.164 14:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:36:54.164 14:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:36:54.164 14:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:54.164 14:29:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:36:54.164 [2024-07-15 14:29:39.867586] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:54.164 [2024-07-15 14:29:39.867793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224100 ] 00:36:54.164 [2024-07-15 14:29:40.012761] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:54.424 [2024-07-15 14:29:40.214598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.620  Copying: 512/512 [B] (average 500 kBps) 00:36:55.620 00:36:55.621 14:29:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0okwam5r9mbhk5w0mjti8zrn65xl6ej2slzvim7wukerlca4la0fintzzmpdzklx1pmh1x5j03xkl0dtde9fsdsj8i3o7q8wl8cq53qx7n3ec6eaiyrfa6p2i628ajfmn1oermpq5j824272b6fexzzeuhn34elg1xzbm89fswhtt1i0uhmwxvbo7wpvgu7i2mcuygsrt4r5cinfpsmlz2yna7spzeyccgpfvswft3zc3huyuqpwxmdbia3xkmm82n7xec6ukcssz9rb74on078wk841yk92f03kbkoecteonr5le6dqrgtinrrlxlvokqpqzntofqac83wvwncspib8u6sgz5chny04t0le1nrjgj7qe8rib5297gqbefr54zrp37t67j0g9763dabc7r1253ctjqg8dfq8svkk0jem5mow0g8kbv2cqmp3ae8ut2zgrnnev23dli577r0dv508lwvl5g0q5tanfw9lipp8wu2fb4afs3y6tlcxp37l == \0\o\k\w\a\m\5\r\9\m\b\h\k\5\w\0\m\j\t\i\8\z\r\n\6\5\x\l\6\e\j\2\s\l\z\v\i\m\7\w\u\k\e\r\l\c\a\4\l\a\0\f\i\n\t\z\z\m\p\d\z\k\l\x\1\p\m\h\1\x\5\j\0\3\x\k\l\0\d\t\d\e\9\f\s\d\s\j\8\i\3\o\7\q\8\w\l\8\c\q\5\3\q\x\7\n\3\e\c\6\e\a\i\y\r\f\a\6\p\2\i\6\2\8\a\j\f\m\n\1\o\e\r\m\p\q\5\j\8\2\4\2\7\2\b\6\f\e\x\z\z\e\u\h\n\3\4\e\l\g\1\x\z\b\m\8\9\f\s\w\h\t\t\1\i\0\u\h\m\w\x\v\b\o\7\w\p\v\g\u\7\i\2\m\c\u\y\g\s\r\t\4\r\5\c\i\n\f\p\s\m\l\z\2\y\n\a\7\s\p\z\e\y\c\c\g\p\f\v\s\w\f\t\3\z\c\3\h\u\y\u\q\p\w\x\m\d\b\i\a\3\x\k\m\m\8\2\n\7\x\e\c\6\u\k\c\s\s\z\9\r\b\7\4\o\n\0\7\8\w\k\8\4\1\y\k\9\2\f\0\3\k\b\k\o\e\c\t\e\o\n\r\5\l\e\6\d\q\r\g\t\i\n\r\r\l\x\l\v\o\k\q\p\q\z\n\t\o\f\q\a\c\8\3\w\v\w\n\c\s\p\i\b\8\u\6\s\g\z\5\c\h\n\y\0\4\t\0\l\e\1\n\r\j\g\j\7\q\e\8\r\i\b\5\2\9\7\g\q\b\e\f\r\5\4\z\r\p\3\7\t\6\7\j\0\g\9\7\6\3\d\a\b\c\7\r\1\2\5\3\c\t\j\q\g\8\d\f\q\8\s\v\k\k\0\j\e\m\5\m\o\w\0\g\8\k\b\v\2\c\q\m\p\3\a\e\8\u\t\2\z\g\r\n\n\e\v\2\3\d\l\i\5\7\7\r\0\d\v\5\0\8\l\w\v\l\5\g\0\q\5\t\a\n\f\w\9\l\i\p\p\8\w\u\2\f\b\4\a\f\s\3\y\6\t\l\c\x\p\3\7\l ]] 00:36:55.621 14:29:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:55.621 14:29:41 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:36:55.621 [2024-07-15 14:29:41.578662] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:55.621 [2024-07-15 14:29:41.578874] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224124 ] 00:36:55.880 [2024-07-15 14:29:41.727255] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:56.140 [2024-07-15 14:29:41.927728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.336  Copying: 512/512 [B] (average 500 kBps) 00:36:57.336 00:36:57.336 14:29:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0okwam5r9mbhk5w0mjti8zrn65xl6ej2slzvim7wukerlca4la0fintzzmpdzklx1pmh1x5j03xkl0dtde9fsdsj8i3o7q8wl8cq53qx7n3ec6eaiyrfa6p2i628ajfmn1oermpq5j824272b6fexzzeuhn34elg1xzbm89fswhtt1i0uhmwxvbo7wpvgu7i2mcuygsrt4r5cinfpsmlz2yna7spzeyccgpfvswft3zc3huyuqpwxmdbia3xkmm82n7xec6ukcssz9rb74on078wk841yk92f03kbkoecteonr5le6dqrgtinrrlxlvokqpqzntofqac83wvwncspib8u6sgz5chny04t0le1nrjgj7qe8rib5297gqbefr54zrp37t67j0g9763dabc7r1253ctjqg8dfq8svkk0jem5mow0g8kbv2cqmp3ae8ut2zgrnnev23dli577r0dv508lwvl5g0q5tanfw9lipp8wu2fb4afs3y6tlcxp37l == \0\o\k\w\a\m\5\r\9\m\b\h\k\5\w\0\m\j\t\i\8\z\r\n\6\5\x\l\6\e\j\2\s\l\z\v\i\m\7\w\u\k\e\r\l\c\a\4\l\a\0\f\i\n\t\z\z\m\p\d\z\k\l\x\1\p\m\h\1\x\5\j\0\3\x\k\l\0\d\t\d\e\9\f\s\d\s\j\8\i\3\o\7\q\8\w\l\8\c\q\5\3\q\x\7\n\3\e\c\6\e\a\i\y\r\f\a\6\p\2\i\6\2\8\a\j\f\m\n\1\o\e\r\m\p\q\5\j\8\2\4\2\7\2\b\6\f\e\x\z\z\e\u\h\n\3\4\e\l\g\1\x\z\b\m\8\9\f\s\w\h\t\t\1\i\0\u\h\m\w\x\v\b\o\7\w\p\v\g\u\7\i\2\m\c\u\y\g\s\r\t\4\r\5\c\i\n\f\p\s\m\l\z\2\y\n\a\7\s\p\z\e\y\c\c\g\p\f\v\s\w\f\t\3\z\c\3\h\u\y\u\q\p\w\x\m\d\b\i\a\3\x\k\m\m\8\2\n\7\x\e\c\6\u\k\c\s\s\z\9\r\b\7\4\o\n\0\7\8\w\k\8\4\1\y\k\9\2\f\0\3\k\b\k\o\e\c\t\e\o\n\r\5\l\e\6\d\q\r\g\t\i\n\r\r\l\x\l\v\o\k\q\p\q\z\n\t\o\f\q\a\c\8\3\w\v\w\n\c\s\p\i\b\8\u\6\s\g\z\5\c\h\n\y\0\4\t\0\l\e\1\n\r\j\g\j\7\q\e\8\r\i\b\5\2\9\7\g\q\b\e\f\r\5\4\z\r\p\3\7\t\6\7\j\0\g\9\7\6\3\d\a\b\c\7\r\1\2\5\3\c\t\j\q\g\8\d\f\q\8\s\v\k\k\0\j\e\m\5\m\o\w\0\g\8\k\b\v\2\c\q\m\p\3\a\e\8\u\t\2\z\g\r\n\n\e\v\2\3\d\l\i\5\7\7\r\0\d\v\5\0\8\l\w\v\l\5\g\0\q\5\t\a\n\f\w\9\l\i\p\p\8\w\u\2\f\b\4\a\f\s\3\y\6\t\l\c\x\p\3\7\l ]] 00:36:57.336 14:29:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:57.336 14:29:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:36:57.336 [2024-07-15 14:29:43.301051] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:57.336 [2024-07-15 14:29:43.301298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224142 ] 00:36:57.594 [2024-07-15 14:29:43.463871] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.852 [2024-07-15 14:29:43.650737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.048  Copying: 512/512 [B] (average 166 kBps) 00:36:59.048 00:36:59.048 14:29:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0okwam5r9mbhk5w0mjti8zrn65xl6ej2slzvim7wukerlca4la0fintzzmpdzklx1pmh1x5j03xkl0dtde9fsdsj8i3o7q8wl8cq53qx7n3ec6eaiyrfa6p2i628ajfmn1oermpq5j824272b6fexzzeuhn34elg1xzbm89fswhtt1i0uhmwxvbo7wpvgu7i2mcuygsrt4r5cinfpsmlz2yna7spzeyccgpfvswft3zc3huyuqpwxmdbia3xkmm82n7xec6ukcssz9rb74on078wk841yk92f03kbkoecteonr5le6dqrgtinrrlxlvokqpqzntofqac83wvwncspib8u6sgz5chny04t0le1nrjgj7qe8rib5297gqbefr54zrp37t67j0g9763dabc7r1253ctjqg8dfq8svkk0jem5mow0g8kbv2cqmp3ae8ut2zgrnnev23dli577r0dv508lwvl5g0q5tanfw9lipp8wu2fb4afs3y6tlcxp37l == \0\o\k\w\a\m\5\r\9\m\b\h\k\5\w\0\m\j\t\i\8\z\r\n\6\5\x\l\6\e\j\2\s\l\z\v\i\m\7\w\u\k\e\r\l\c\a\4\l\a\0\f\i\n\t\z\z\m\p\d\z\k\l\x\1\p\m\h\1\x\5\j\0\3\x\k\l\0\d\t\d\e\9\f\s\d\s\j\8\i\3\o\7\q\8\w\l\8\c\q\5\3\q\x\7\n\3\e\c\6\e\a\i\y\r\f\a\6\p\2\i\6\2\8\a\j\f\m\n\1\o\e\r\m\p\q\5\j\8\2\4\2\7\2\b\6\f\e\x\z\z\e\u\h\n\3\4\e\l\g\1\x\z\b\m\8\9\f\s\w\h\t\t\1\i\0\u\h\m\w\x\v\b\o\7\w\p\v\g\u\7\i\2\m\c\u\y\g\s\r\t\4\r\5\c\i\n\f\p\s\m\l\z\2\y\n\a\7\s\p\z\e\y\c\c\g\p\f\v\s\w\f\t\3\z\c\3\h\u\y\u\q\p\w\x\m\d\b\i\a\3\x\k\m\m\8\2\n\7\x\e\c\6\u\k\c\s\s\z\9\r\b\7\4\o\n\0\7\8\w\k\8\4\1\y\k\9\2\f\0\3\k\b\k\o\e\c\t\e\o\n\r\5\l\e\6\d\q\r\g\t\i\n\r\r\l\x\l\v\o\k\q\p\q\z\n\t\o\f\q\a\c\8\3\w\v\w\n\c\s\p\i\b\8\u\6\s\g\z\5\c\h\n\y\0\4\t\0\l\e\1\n\r\j\g\j\7\q\e\8\r\i\b\5\2\9\7\g\q\b\e\f\r\5\4\z\r\p\3\7\t\6\7\j\0\g\9\7\6\3\d\a\b\c\7\r\1\2\5\3\c\t\j\q\g\8\d\f\q\8\s\v\k\k\0\j\e\m\5\m\o\w\0\g\8\k\b\v\2\c\q\m\p\3\a\e\8\u\t\2\z\g\r\n\n\e\v\2\3\d\l\i\5\7\7\r\0\d\v\5\0\8\l\w\v\l\5\g\0\q\5\t\a\n\f\w\9\l\i\p\p\8\w\u\2\f\b\4\a\f\s\3\y\6\t\l\c\x\p\3\7\l ]] 00:36:59.048 14:29:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:36:59.048 14:29:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:36:59.048 [2024-07-15 14:29:45.000981] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:36:59.048 [2024-07-15 14:29:45.001156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224166 ] 00:36:59.307 [2024-07-15 14:29:45.150703] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.566 [2024-07-15 14:29:45.347303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.761  Copying: 512/512 [B] (average 250 kBps) 00:37:00.761 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 0okwam5r9mbhk5w0mjti8zrn65xl6ej2slzvim7wukerlca4la0fintzzmpdzklx1pmh1x5j03xkl0dtde9fsdsj8i3o7q8wl8cq53qx7n3ec6eaiyrfa6p2i628ajfmn1oermpq5j824272b6fexzzeuhn34elg1xzbm89fswhtt1i0uhmwxvbo7wpvgu7i2mcuygsrt4r5cinfpsmlz2yna7spzeyccgpfvswft3zc3huyuqpwxmdbia3xkmm82n7xec6ukcssz9rb74on078wk841yk92f03kbkoecteonr5le6dqrgtinrrlxlvokqpqzntofqac83wvwncspib8u6sgz5chny04t0le1nrjgj7qe8rib5297gqbefr54zrp37t67j0g9763dabc7r1253ctjqg8dfq8svkk0jem5mow0g8kbv2cqmp3ae8ut2zgrnnev23dli577r0dv508lwvl5g0q5tanfw9lipp8wu2fb4afs3y6tlcxp37l == \0\o\k\w\a\m\5\r\9\m\b\h\k\5\w\0\m\j\t\i\8\z\r\n\6\5\x\l\6\e\j\2\s\l\z\v\i\m\7\w\u\k\e\r\l\c\a\4\l\a\0\f\i\n\t\z\z\m\p\d\z\k\l\x\1\p\m\h\1\x\5\j\0\3\x\k\l\0\d\t\d\e\9\f\s\d\s\j\8\i\3\o\7\q\8\w\l\8\c\q\5\3\q\x\7\n\3\e\c\6\e\a\i\y\r\f\a\6\p\2\i\6\2\8\a\j\f\m\n\1\o\e\r\m\p\q\5\j\8\2\4\2\7\2\b\6\f\e\x\z\z\e\u\h\n\3\4\e\l\g\1\x\z\b\m\8\9\f\s\w\h\t\t\1\i\0\u\h\m\w\x\v\b\o\7\w\p\v\g\u\7\i\2\m\c\u\y\g\s\r\t\4\r\5\c\i\n\f\p\s\m\l\z\2\y\n\a\7\s\p\z\e\y\c\c\g\p\f\v\s\w\f\t\3\z\c\3\h\u\y\u\q\p\w\x\m\d\b\i\a\3\x\k\m\m\8\2\n\7\x\e\c\6\u\k\c\s\s\z\9\r\b\7\4\o\n\0\7\8\w\k\8\4\1\y\k\9\2\f\0\3\k\b\k\o\e\c\t\e\o\n\r\5\l\e\6\d\q\r\g\t\i\n\r\r\l\x\l\v\o\k\q\p\q\z\n\t\o\f\q\a\c\8\3\w\v\w\n\c\s\p\i\b\8\u\6\s\g\z\5\c\h\n\y\0\4\t\0\l\e\1\n\r\j\g\j\7\q\e\8\r\i\b\5\2\9\7\g\q\b\e\f\r\5\4\z\r\p\3\7\t\6\7\j\0\g\9\7\6\3\d\a\b\c\7\r\1\2\5\3\c\t\j\q\g\8\d\f\q\8\s\v\k\k\0\j\e\m\5\m\o\w\0\g\8\k\b\v\2\c\q\m\p\3\a\e\8\u\t\2\z\g\r\n\n\e\v\2\3\d\l\i\5\7\7\r\0\d\v\5\0\8\l\w\v\l\5\g\0\q\5\t\a\n\f\w\9\l\i\p\p\8\w\u\2\f\b\4\a\f\s\3\y\6\t\l\c\x\p\3\7\l ]] 00:37:00.761 00:37:00.761 real 0m13.819s 00:37:00.761 user 0m11.154s 00:37:00.761 sys 0m1.690s 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:37:00.761 ************************************ 00:37:00.761 END TEST dd_flags_misc_forced_aio 00:37:00.761 ************************************ 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1142 -- # return 0 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:37:00.761 00:37:00.761 real 0m58.124s 00:37:00.761 user 0m44.999s 00:37:00.761 sys 0m7.218s 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:00.761 14:29:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:37:00.761 ************************************ 
00:37:00.761 END TEST spdk_dd_posix 00:37:00.761 ************************************ 00:37:01.019 14:29:46 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:37:01.019 14:29:46 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:37:01.019 14:29:46 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:01.019 14:29:46 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:01.019 14:29:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:01.019 ************************************ 00:37:01.019 START TEST spdk_dd_malloc 00:37:01.019 ************************************ 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:37:01.019 * Looking for test storage... 00:37:01.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:37:01.019 ************************************ 00:37:01.019 START TEST dd_malloc_copy 00:37:01.019 ************************************ 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1123 -- # malloc_copy 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:37:01.019 14:29:46 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:37:01.019 [2024-07-15 14:29:46.944839] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:01.019 [2024-07-15 14:29:46.945023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224257 ] 00:37:01.019 { 00:37:01.019 "subsystems": [ 00:37:01.019 { 00:37:01.019 "subsystem": "bdev", 00:37:01.019 "config": [ 00:37:01.019 { 00:37:01.019 "params": { 00:37:01.019 "block_size": 512, 00:37:01.019 "num_blocks": 1048576, 00:37:01.019 "name": "malloc0" 00:37:01.019 }, 00:37:01.019 "method": "bdev_malloc_create" 00:37:01.019 }, 00:37:01.019 { 00:37:01.019 "params": { 00:37:01.019 "block_size": 512, 00:37:01.019 "num_blocks": 1048576, 00:37:01.019 "name": "malloc1" 00:37:01.019 }, 00:37:01.019 "method": "bdev_malloc_create" 00:37:01.019 }, 00:37:01.019 { 00:37:01.019 "method": "bdev_wait_for_examine" 00:37:01.019 } 00:37:01.019 ] 00:37:01.019 } 00:37:01.019 ] 00:37:01.019 } 00:37:01.277 [2024-07-15 14:29:47.096982] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:01.535 [2024-07-15 14:29:47.299407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.371  Copying: 511/512 [MB] (511 MBps) Copying: 512/512 [MB] (average 510 MBps) 00:37:07.371 00:37:07.371 14:29:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:37:07.371 14:29:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:37:07.371 14:29:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:37:07.371 14:29:52 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:37:07.371 [2024-07-15 14:29:53.009180] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:07.371 [2024-07-15 14:29:53.009803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224337 ] 00:37:07.371 { 00:37:07.371 "subsystems": [ 00:37:07.371 { 00:37:07.371 "subsystem": "bdev", 00:37:07.371 "config": [ 00:37:07.371 { 00:37:07.371 "params": { 00:37:07.371 "block_size": 512, 00:37:07.371 "num_blocks": 1048576, 00:37:07.371 "name": "malloc0" 00:37:07.371 }, 00:37:07.371 "method": "bdev_malloc_create" 00:37:07.371 }, 00:37:07.371 { 00:37:07.371 "params": { 00:37:07.371 "block_size": 512, 00:37:07.371 "num_blocks": 1048576, 00:37:07.371 "name": "malloc1" 00:37:07.371 }, 00:37:07.371 "method": "bdev_malloc_create" 00:37:07.371 }, 00:37:07.371 { 00:37:07.371 "method": "bdev_wait_for_examine" 00:37:07.371 } 00:37:07.371 ] 00:37:07.371 } 00:37:07.371 ] 00:37:07.371 } 00:37:07.371 [2024-07-15 14:29:53.163241] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.371 [2024-07-15 14:29:53.362435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:13.197  Copying: 511/512 [MB] (511 MBps) Copying: 512/512 [MB] (average 512 MBps) 00:37:13.197 00:37:13.197 00:37:13.197 real 0m12.187s 00:37:13.197 user 0m10.891s 00:37:13.197 sys 0m1.120s 00:37:13.197 14:29:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:13.197 14:29:59 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:37:13.197 ************************************ 00:37:13.197 END TEST dd_malloc_copy 00:37:13.197 ************************************ 00:37:13.197 14:29:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1142 -- # return 0 00:37:13.197 00:37:13.197 real 0m12.324s 00:37:13.197 user 0m10.955s 00:37:13.197 sys 0m1.195s 00:37:13.197 14:29:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:13.197 14:29:59 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:37:13.197 ************************************ 00:37:13.197 END TEST spdk_dd_malloc 00:37:13.197 ************************************ 00:37:13.197 14:29:59 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:37:13.197 14:29:59 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:37:13.197 14:29:59 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:37:13.197 14:29:59 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:13.197 14:29:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:13.197 ************************************ 00:37:13.197 START TEST spdk_dd_bdev_to_bdev 00:37:13.197 ************************************ 00:37:13.197 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:37:13.456 * Looking for test storage... 
00:37:13.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:37:13.456 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:13.456 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:13.456 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:13.456 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:37:13.457 14:29:59 
spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:37:13.457 14:29:59 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:37:13.457 [2024-07-15 14:29:59.308642] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:37:13.457 [2024-07-15 14:29:59.308904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224469 ] 00:37:13.715 [2024-07-15 14:29:59.471023] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:13.974 [2024-07-15 14:29:59.722642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:15.607  Copying: 256/256 [MB] (average 1815 MBps) 00:37:15.607 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:15.607 ************************************ 00:37:15.607 START TEST dd_inflate_file 00:37:15.607 ************************************ 00:37:15.607 14:30:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:37:15.607 [2024-07-15 14:30:01.349408] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:15.607 [2024-07-15 14:30:01.349619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224499 ] 00:37:15.607 [2024-07-15 14:30:01.508525] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:15.865 [2024-07-15 14:30:01.713561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.499  Copying: 64/64 [MB] (average 2000 MBps) 00:37:17.499 00:37:17.499 00:37:17.499 real 0m1.785s 00:37:17.499 user 0m1.413s 00:37:17.499 sys 0m0.247s 00:37:17.499 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:17.499 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:37:17.499 ************************************ 00:37:17.499 END TEST dd_inflate_file 00:37:17.499 ************************************ 00:37:17.499 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:37:17.499 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:37:17.499 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:17.500 ************************************ 00:37:17.500 START TEST dd_copy_to_out_bdev 00:37:17.500 ************************************ 00:37:17.500 14:30:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:37:17.500 [2024-07-15 14:30:03.197942] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:17.500 [2024-07-15 14:30:03.198154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224558 ] 00:37:17.500 { 00:37:17.500 "subsystems": [ 00:37:17.500 { 00:37:17.500 "subsystem": "bdev", 00:37:17.500 "config": [ 00:37:17.500 { 00:37:17.500 "params": { 00:37:17.500 "block_size": 4096, 00:37:17.500 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:17.500 "name": "aio1" 00:37:17.500 }, 00:37:17.500 "method": "bdev_aio_create" 00:37:17.500 }, 00:37:17.500 { 00:37:17.500 "params": { 00:37:17.500 "trtype": "pcie", 00:37:17.500 "traddr": "0000:00:10.0", 00:37:17.500 "name": "Nvme0" 00:37:17.500 }, 00:37:17.500 "method": "bdev_nvme_attach_controller" 00:37:17.500 }, 00:37:17.500 { 00:37:17.500 "method": "bdev_wait_for_examine" 00:37:17.500 } 00:37:17.500 ] 00:37:17.500 } 00:37:17.500 ] 00:37:17.500 } 00:37:17.500 [2024-07-15 14:30:03.359834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.758 [2024-07-15 14:30:03.559456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:20.104  Copying: 62/64 [MB] (62 MBps) Copying: 64/64 [MB] (average 62 MBps) 00:37:20.104 00:37:20.104 00:37:20.104 real 0m2.926s 00:37:20.104 user 0m2.584s 00:37:20.104 sys 0m0.268s 00:37:20.104 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:20.104 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:20.104 ************************************ 00:37:20.104 END TEST dd_copy_to_out_bdev 00:37:20.104 ************************************ 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:20.363 ************************************ 00:37:20.363 START TEST dd_offset_magic 00:37:20.363 ************************************ 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1123 -- # offset_magic 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic 
-- dd/common.sh@31 -- # xtrace_disable 00:37:20.363 14:30:06 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:37:20.364 [2024-07-15 14:30:06.177075] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:37:20.364 [2024-07-15 14:30:06.177308] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224617 ] 00:37:20.364 { 00:37:20.364 "subsystems": [ 00:37:20.364 { 00:37:20.364 "subsystem": "bdev", 00:37:20.364 "config": [ 00:37:20.364 { 00:37:20.364 "params": { 00:37:20.364 "block_size": 4096, 00:37:20.364 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:20.364 "name": "aio1" 00:37:20.364 }, 00:37:20.364 "method": "bdev_aio_create" 00:37:20.364 }, 00:37:20.364 { 00:37:20.364 "params": { 00:37:20.364 "trtype": "pcie", 00:37:20.364 "traddr": "0000:00:10.0", 00:37:20.364 "name": "Nvme0" 00:37:20.364 }, 00:37:20.364 "method": "bdev_nvme_attach_controller" 00:37:20.364 }, 00:37:20.364 { 00:37:20.364 "method": "bdev_wait_for_examine" 00:37:20.364 } 00:37:20.364 ] 00:37:20.364 } 00:37:20.364 ] 00:37:20.364 } 00:37:20.364 [2024-07-15 14:30:06.336028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:20.623 [2024-07-15 14:30:06.526566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.565  Copying: 65/65 [MB] (average 251 MBps) 00:37:22.565 00:37:22.565 14:30:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:37:22.565 14:30:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:37:22.566 14:30:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:37:22.566 14:30:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:37:22.566 [2024-07-15 14:30:08.289039] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:22.566 [2024-07-15 14:30:08.289346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224649 ] 00:37:22.566 { 00:37:22.566 "subsystems": [ 00:37:22.566 { 00:37:22.566 "subsystem": "bdev", 00:37:22.566 "config": [ 00:37:22.566 { 00:37:22.566 "params": { 00:37:22.566 "block_size": 4096, 00:37:22.566 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:22.566 "name": "aio1" 00:37:22.566 }, 00:37:22.566 "method": "bdev_aio_create" 00:37:22.566 }, 00:37:22.566 { 00:37:22.566 "params": { 00:37:22.566 "trtype": "pcie", 00:37:22.566 "traddr": "0000:00:10.0", 00:37:22.566 "name": "Nvme0" 00:37:22.566 }, 00:37:22.566 "method": "bdev_nvme_attach_controller" 00:37:22.566 }, 00:37:22.566 { 00:37:22.566 "method": "bdev_wait_for_examine" 00:37:22.566 } 00:37:22.566 ] 00:37:22.566 } 00:37:22.566 ] 00:37:22.566 } 00:37:22.566 [2024-07-15 14:30:08.461004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.824 [2024-07-15 14:30:08.668094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:24.453  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:24.453 00:37:24.453 14:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:37:24.453 14:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:37:24.453 14:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:37:24.453 14:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:37:24.453 14:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:37:24.453 14:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:37:24.453 14:30:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:37:24.453 [2024-07-15 14:30:10.174107] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:24.453 [2024-07-15 14:30:10.174346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224682 ] 00:37:24.453 { 00:37:24.453 "subsystems": [ 00:37:24.453 { 00:37:24.453 "subsystem": "bdev", 00:37:24.453 "config": [ 00:37:24.453 { 00:37:24.453 "params": { 00:37:24.453 "block_size": 4096, 00:37:24.453 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:24.453 "name": "aio1" 00:37:24.453 }, 00:37:24.453 "method": "bdev_aio_create" 00:37:24.453 }, 00:37:24.453 { 00:37:24.453 "params": { 00:37:24.453 "trtype": "pcie", 00:37:24.453 "traddr": "0000:00:10.0", 00:37:24.453 "name": "Nvme0" 00:37:24.453 }, 00:37:24.453 "method": "bdev_nvme_attach_controller" 00:37:24.453 }, 00:37:24.453 { 00:37:24.453 "method": "bdev_wait_for_examine" 00:37:24.453 } 00:37:24.453 ] 00:37:24.453 } 00:37:24.453 ] 00:37:24.453 } 00:37:24.453 [2024-07-15 14:30:10.337718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:24.710 [2024-07-15 14:30:10.538584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:26.339  Copying: 65/65 [MB] (average 1625 MBps) 00:37:26.339 00:37:26.339 14:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:37:26.339 14:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:37:26.339 14:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:37:26.339 14:30:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:37:26.339 [2024-07-15 14:30:12.079158] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:26.339 [2024-07-15 14:30:12.079400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224712 ] 00:37:26.339 { 00:37:26.339 "subsystems": [ 00:37:26.339 { 00:37:26.339 "subsystem": "bdev", 00:37:26.339 "config": [ 00:37:26.339 { 00:37:26.339 "params": { 00:37:26.339 "block_size": 4096, 00:37:26.339 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:26.339 "name": "aio1" 00:37:26.339 }, 00:37:26.339 "method": "bdev_aio_create" 00:37:26.339 }, 00:37:26.339 { 00:37:26.339 "params": { 00:37:26.339 "trtype": "pcie", 00:37:26.339 "traddr": "0000:00:10.0", 00:37:26.339 "name": "Nvme0" 00:37:26.339 }, 00:37:26.339 "method": "bdev_nvme_attach_controller" 00:37:26.339 }, 00:37:26.339 { 00:37:26.339 "method": "bdev_wait_for_examine" 00:37:26.339 } 00:37:26.339 ] 00:37:26.339 } 00:37:26.339 ] 00:37:26.339 } 00:37:26.339 [2024-07-15 14:30:12.234103] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:26.598 [2024-07-15 14:30:12.442265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.100  Copying: 1024/1024 [kB] (average 1000 MBps) 00:37:28.101 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:37:28.101 00:37:28.101 real 0m7.888s 00:37:28.101 user 0m6.338s 00:37:28.101 sys 0m1.032s 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:37:28.101 ************************************ 00:37:28.101 END TEST dd_offset_magic 00:37:28.101 ************************************ 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1142 -- # return 0 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:37:28.101 14:30:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:28.101 [2024-07-15 14:30:14.094809] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:28.101 [2024-07-15 14:30:14.094979] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224756 ] 00:37:28.360 { 00:37:28.360 "subsystems": [ 00:37:28.360 { 00:37:28.360 "subsystem": "bdev", 00:37:28.360 "config": [ 00:37:28.360 { 00:37:28.360 "params": { 00:37:28.360 "block_size": 4096, 00:37:28.360 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:28.360 "name": "aio1" 00:37:28.360 }, 00:37:28.360 "method": "bdev_aio_create" 00:37:28.360 }, 00:37:28.360 { 00:37:28.360 "params": { 00:37:28.360 "trtype": "pcie", 00:37:28.360 "traddr": "0000:00:10.0", 00:37:28.360 "name": "Nvme0" 00:37:28.360 }, 00:37:28.360 "method": "bdev_nvme_attach_controller" 00:37:28.360 }, 00:37:28.360 { 00:37:28.360 "method": "bdev_wait_for_examine" 00:37:28.360 } 00:37:28.360 ] 00:37:28.360 } 00:37:28.360 ] 00:37:28.360 } 00:37:28.360 [2024-07-15 14:30:14.248182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.619 [2024-07-15 14:30:14.461916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:30.268  Copying: 5120/5120 [kB] (average 1000 MBps) 00:37:30.268 00:37:30.268 14:30:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:37:30.268 14:30:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:37:30.268 14:30:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:37:30.268 14:30:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:37:30.268 14:30:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:37:30.268 14:30:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:37:30.268 14:30:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:37:30.268 14:30:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:37:30.268 14:30:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:37:30.268 14:30:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:30.268 [2024-07-15 14:30:16.041350] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:30.268 [2024-07-15 14:30:16.041549] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224788 ] 00:37:30.268 { 00:37:30.268 "subsystems": [ 00:37:30.268 { 00:37:30.268 "subsystem": "bdev", 00:37:30.268 "config": [ 00:37:30.268 { 00:37:30.268 "params": { 00:37:30.268 "block_size": 4096, 00:37:30.268 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:37:30.268 "name": "aio1" 00:37:30.268 }, 00:37:30.268 "method": "bdev_aio_create" 00:37:30.268 }, 00:37:30.268 { 00:37:30.268 "params": { 00:37:30.268 "trtype": "pcie", 00:37:30.268 "traddr": "0000:00:10.0", 00:37:30.268 "name": "Nvme0" 00:37:30.268 }, 00:37:30.268 "method": "bdev_nvme_attach_controller" 00:37:30.268 }, 00:37:30.268 { 00:37:30.268 "method": "bdev_wait_for_examine" 00:37:30.268 } 00:37:30.268 ] 00:37:30.268 } 00:37:30.268 ] 00:37:30.268 } 00:37:30.268 [2024-07-15 14:30:16.204438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.526 [2024-07-15 14:30:16.411310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:32.041  Copying: 5120/5120 [kB] (average 1250 MBps) 00:37:32.041 00:37:32.041 14:30:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:37:32.041 00:37:32.041 real 0m18.802s 00:37:32.041 user 0m15.194s 00:37:32.041 sys 0m2.585s 00:37:32.041 14:30:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:32.041 ************************************ 00:37:32.041 END TEST spdk_dd_bdev_to_bdev 00:37:32.041 ************************************ 00:37:32.041 14:30:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:32.041 14:30:18 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:37:32.041 14:30:18 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:37:32.041 14:30:18 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:37:32.041 14:30:18 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:32.041 14:30:18 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:32.041 14:30:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:32.041 ************************************ 00:37:32.041 START TEST spdk_dd_sparse 00:37:32.041 ************************************ 00:37:32.041 14:30:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:37:32.300 * Looking for test storage... 
00:37:32.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- 
dd/sparse.sh@118 -- # prepare 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:37:32.300 1+0 records in 00:37:32.300 1+0 records out 00:37:32.300 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00437557 s, 959 MB/s 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:37:32.300 1+0 records in 00:37:32.300 1+0 records out 00:37:32.300 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00305919 s, 1.4 GB/s 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:37:32.300 1+0 records in 00:37:32.300 1+0 records out 00:37:32.300 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00457328 s, 917 MB/s 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:32.300 ************************************ 00:37:32.300 START TEST dd_sparse_file_to_file 00:37:32.300 ************************************ 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1123 -- # file_to_file 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:37:32.300 14:30:18 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:32.300 [2024-07-15 14:30:18.201959] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:32.300 [2024-07-15 14:30:18.202576] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224875 ] 00:37:32.300 { 00:37:32.300 "subsystems": [ 00:37:32.300 { 00:37:32.300 "subsystem": "bdev", 00:37:32.300 "config": [ 00:37:32.300 { 00:37:32.300 "params": { 00:37:32.300 "block_size": 4096, 00:37:32.300 "filename": "dd_sparse_aio_disk", 00:37:32.300 "name": "dd_aio" 00:37:32.300 }, 00:37:32.300 "method": "bdev_aio_create" 00:37:32.300 }, 00:37:32.300 { 00:37:32.300 "params": { 00:37:32.300 "lvs_name": "dd_lvstore", 00:37:32.300 "bdev_name": "dd_aio" 00:37:32.300 }, 00:37:32.300 "method": "bdev_lvol_create_lvstore" 00:37:32.300 }, 00:37:32.300 { 00:37:32.300 "method": "bdev_wait_for_examine" 00:37:32.300 } 00:37:32.300 ] 00:37:32.300 } 00:37:32.300 ] 00:37:32.301 } 00:37:32.559 [2024-07-15 14:30:18.352460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:32.818 [2024-07-15 14:30:18.645479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:34.457  Copying: 12/36 [MB] (average 1500 MBps) 00:37:34.457 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:37:34.457 00:37:34.457 real 0m2.064s 00:37:34.457 user 0m1.702s 00:37:34.457 sys 0m0.258s 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:34.457 ************************************ 00:37:34.457 END TEST dd_sparse_file_to_file 00:37:34.457 ************************************ 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:34.457 ************************************ 00:37:34.457 START TEST dd_sparse_file_to_bdev 00:37:34.457 ************************************ 00:37:34.457 14:30:20 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1123 -- # file_to_bdev 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:37:34.457 14:30:20 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:34.457 [2024-07-15 14:30:20.309970] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:37:34.457 [2024-07-15 14:30:20.310166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224941 ] 00:37:34.457 { 00:37:34.457 "subsystems": [ 00:37:34.457 { 00:37:34.457 "subsystem": "bdev", 00:37:34.457 "config": [ 00:37:34.457 { 00:37:34.457 "params": { 00:37:34.457 "block_size": 4096, 00:37:34.457 "filename": "dd_sparse_aio_disk", 00:37:34.457 "name": "dd_aio" 00:37:34.457 }, 00:37:34.457 "method": "bdev_aio_create" 00:37:34.457 }, 00:37:34.457 { 00:37:34.457 "params": { 00:37:34.457 "lvs_name": "dd_lvstore", 00:37:34.457 "lvol_name": "dd_lvol", 00:37:34.457 "size_in_mib": 36, 00:37:34.457 "thin_provision": true 00:37:34.457 }, 00:37:34.457 "method": "bdev_lvol_create" 00:37:34.457 }, 00:37:34.457 { 00:37:34.457 "method": "bdev_wait_for_examine" 00:37:34.457 } 00:37:34.457 ] 00:37:34.457 } 00:37:34.457 ] 00:37:34.457 } 00:37:34.457 [2024-07-15 14:30:20.459292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:34.716 [2024-07-15 14:30:20.672332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:36.220  Copying: 12/36 [MB] (average 461 MBps) 00:37:36.220 00:37:36.480 00:37:36.480 real 0m1.957s 00:37:36.480 user 0m1.628s 00:37:36.480 sys 0m0.255s 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:37:36.480 ************************************ 00:37:36.480 END TEST dd_sparse_file_to_bdev 00:37:36.480 ************************************ 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:36.480 14:30:22 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:36.480 ************************************ 00:37:36.480 START TEST dd_sparse_bdev_to_file 00:37:36.480 ************************************ 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1123 -- # bdev_to_file 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:37:36.480 14:30:22 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:36.480 [2024-07-15 14:30:22.318498] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:37:36.480 [2024-07-15 14:30:22.318703] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid224993 ] 00:37:36.480 { 00:37:36.480 "subsystems": [ 00:37:36.480 { 00:37:36.480 "subsystem": "bdev", 00:37:36.480 "config": [ 00:37:36.480 { 00:37:36.480 "params": { 00:37:36.480 "block_size": 4096, 00:37:36.480 "filename": "dd_sparse_aio_disk", 00:37:36.480 "name": "dd_aio" 00:37:36.480 }, 00:37:36.480 "method": "bdev_aio_create" 00:37:36.480 }, 00:37:36.480 { 00:37:36.480 "method": "bdev_wait_for_examine" 00:37:36.480 } 00:37:36.480 ] 00:37:36.480 } 00:37:36.480 ] 00:37:36.480 } 00:37:36.480 [2024-07-15 14:30:22.474430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:36.739 [2024-07-15 14:30:22.679186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.253  Copying: 12/36 [MB] (average 1500 MBps) 00:37:38.253 00:37:38.253 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:37:38.253 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:37:38.253 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:37:38.253 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:37:38.253 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:37:38.254 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:37:38.254 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:37:38.254 14:30:24 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:37:38.513 00:37:38.513 real 0m1.976s 00:37:38.513 user 0m1.634s 00:37:38.513 sys 0m0.266s 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:37:38.513 ************************************ 00:37:38.513 END TEST dd_sparse_bdev_to_file 00:37:38.513 ************************************ 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1142 -- # return 0 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:37:38.513 00:37:38.513 real 0m6.287s 00:37:38.513 user 0m5.081s 00:37:38.513 sys 0m0.936s 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:38.513 ************************************ 00:37:38.513 END TEST spdk_dd_sparse 00:37:38.513 14:30:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:37:38.513 ************************************ 00:37:38.513 14:30:24 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:37:38.513 14:30:24 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:37:38.513 14:30:24 spdk_dd -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:38.513 14:30:24 spdk_dd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.513 14:30:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:38.513 ************************************ 00:37:38.513 START TEST spdk_dd_negative 00:37:38.513 ************************************ 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:37:38.513 * Looking for test storage... 
00:37:38.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:38.513 ************************************ 00:37:38.513 START TEST dd_invalid_arguments 00:37:38.513 ************************************ 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1123 -- # invalid_arguments 00:37:38.513 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:38.514 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:37:38.514 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:37:38.514 00:37:38.514 CPU options: 00:37:38.514 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:37:38.514 (like [0,1,10]) 00:37:38.514 --lcores lcore to CPU mapping list. The list is in the format: 00:37:38.514 [<,lcores[@CPUs]>...] 00:37:38.514 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:37:38.514 Within the group, '-' is used for range separator, 00:37:38.514 ',' is used for single number separator. 00:37:38.514 '( )' can be omitted for single element group, 00:37:38.514 '@' can be omitted if cpus and lcores have the same value 00:37:38.514 --disable-cpumask-locks Disable CPU core lock files. 00:37:38.514 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:37:38.514 pollers in the app support interrupt mode) 00:37:38.514 -p, --main-core main (primary) core for DPDK 00:37:38.514 00:37:38.514 Configuration options: 00:37:38.514 -c, --config, --json JSON config file 00:37:38.514 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:37:38.514 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:37:38.514 --wait-for-rpc wait for RPCs to initialize subsystems 00:37:38.514 --rpcs-allowed comma-separated list of permitted RPCS 00:37:38.514 --json-ignore-init-errors don't exit on invalid config entry 00:37:38.514 00:37:38.514 Memory options: 00:37:38.514 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:37:38.514 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:37:38.514 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:37:38.514 -R, --huge-unlink unlink huge files after initialization 00:37:38.514 -n, --mem-channels number of memory channels used for DPDK 00:37:38.514 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:37:38.514 --msg-mempool-size global message memory pool size in count (default: 262143) 00:37:38.514 --no-huge run without using hugepages 00:37:38.514 -i, --shm-id shared memory ID (optional) 00:37:38.514 -g, --single-file-segments force creating just one hugetlbfs file 00:37:38.514 00:37:38.514 PCI options: 00:37:38.514 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:37:38.514 -B, --pci-blocked pci addr to block (can be used more than once) 00:37:38.514 -u, --no-pci disable PCI access 00:37:38.514 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:37:38.514 00:37:38.514 Log options: 00:37:38.514 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:37:38.514 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:37:38.514 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:37:38.514 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:37:38.514 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:37:38.514 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:37:38.514 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:37:38.514 thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:37:38.514 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:37:38.514 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:37:38.514 virtio_vfio_user, vmd) 00:37:38.514 --silence-noticelog disable notice level logging to stderr 00:37:38.514 00:37:38.514 Trace options: 00:37:38.514 --num-trace-entries number of trace entries for each core, must be power of 2, 00:37:38.514 setting 0 to disable trace (default 32768) 00:37:38.514 Tracepoints vary in size and can use more than one trace entry. 00:37:38.514 -e, --tpoint-group [:] 00:37:38.514 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:37:38.514 blob/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:37:38.514 [2024-07-15 14:30:24.511948] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:37:38.773 fs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:37:38.773 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:37:38.773 a tracepoint group. First tpoint inside a group can be enabled by 00:37:38.774 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:37:38.774 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:37:38.774 in /include/spdk_internal/trace_defs.h 00:37:38.774 00:37:38.774 Other options: 00:37:38.774 -h, --help show this usage 00:37:38.774 -v, --version print SPDK version 00:37:38.774 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:37:38.774 --env-context Opaque context for use of the env implementation 00:37:38.774 00:37:38.774 Application specific: 00:37:38.774 [--------- DD Options ---------] 00:37:38.774 --if Input file. Must specify either --if or --ib. 00:37:38.774 --ib Input bdev. Must specifier either --if or --ib 00:37:38.774 --of Output file. Must specify either --of or --ob. 00:37:38.774 --ob Output bdev. Must specify either --of or --ob. 00:37:38.774 --iflag Input file flags. 00:37:38.774 --oflag Output file flags. 00:37:38.774 --bs I/O unit size (default: 4096) 00:37:38.774 --qd Queue depth (default: 2) 00:37:38.774 --count I/O unit count. The number of I/O units to copy. (default: all) 00:37:38.774 --skip Skip this many I/O units at start of input. (default: 0) 00:37:38.774 --seek Skip this many I/O units at start of output. (default: 0) 00:37:38.774 --aio Force usage of AIO. (by default io_uring is used if available) 00:37:38.774 --sparse Enable hole skipping in input target 00:37:38.774 Available iflag and oflag values: 00:37:38.774 append - append mode 00:37:38.774 direct - use direct I/O for data 00:37:38.774 directory - fail unless a directory 00:37:38.774 dsync - use synchronized I/O for data 00:37:38.774 noatime - do not update access time 00:37:38.774 noctty - do not assign controlling terminal from file 00:37:38.774 nofollow - do not follow symlinks 00:37:38.774 nonblock - use non-blocking I/O 00:37:38.774 sync - use synchronized I/O for data and metadata 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:38.774 00:37:38.774 real 0m0.075s 00:37:38.774 user 0m0.040s 00:37:38.774 sys 0m0.034s 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:37:38.774 ************************************ 00:37:38.774 END TEST dd_invalid_arguments 00:37:38.774 ************************************ 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:38.774 ************************************ 00:37:38.774 START TEST dd_double_input 00:37:38.774 ************************************ 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1123 -- # double_input 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- 
dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:37:38.774 [2024-07-15 14:30:24.639453] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:38.774 00:37:38.774 real 0m0.077s 00:37:38.774 user 0m0.042s 00:37:38.774 sys 0m0.034s 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:37:38.774 ************************************ 00:37:38.774 END TEST dd_double_input 00:37:38.774 ************************************ 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:38.774 ************************************ 00:37:38.774 START TEST dd_double_output 00:37:38.774 ************************************ 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1123 -- # double_output 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:38.774 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:37:38.775 [2024-07-15 14:30:24.772035] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:39.034 00:37:39.034 real 0m0.075s 00:37:39.034 user 0m0.041s 00:37:39.034 sys 0m0.033s 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:37:39.034 ************************************ 00:37:39.034 END TEST dd_double_output 00:37:39.034 ************************************ 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:39.034 ************************************ 00:37:39.034 START TEST dd_no_input 00:37:39.034 ************************************ 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1123 -- # no_input 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:39.034 14:30:24 
spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:37:39.034 [2024-07-15 14:30:24.894789] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:39.034 00:37:39.034 real 0m0.071s 00:37:39.034 user 0m0.041s 00:37:39.034 sys 0m0.029s 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:37:39.034 ************************************ 00:37:39.034 END TEST dd_no_input 00:37:39.034 ************************************ 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:39.034 ************************************ 00:37:39.034 START TEST dd_no_output 00:37:39.034 ************************************ 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1123 -- # no_output 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.034 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.035 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.035 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.035 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.035 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.035 14:30:24 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:39.035 14:30:24 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:37:39.035 [2024-07-15 14:30:25.025293] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:39.294 00:37:39.294 real 0m0.084s 00:37:39.294 user 0m0.043s 00:37:39.294 sys 0m0.040s 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:37:39.294 ************************************ 00:37:39.294 END TEST dd_no_output 00:37:39.294 ************************************ 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:39.294 ************************************ 00:37:39.294 START TEST dd_wrong_blocksize 00:37:39.294 ************************************ 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1123 -- # wrong_blocksize 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.294 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- 
common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:37:39.295 [2024-07-15 14:30:25.161561] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:39.295 00:37:39.295 real 0m0.081s 00:37:39.295 user 0m0.049s 00:37:39.295 sys 0m0.031s 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:37:39.295 ************************************ 00:37:39.295 END TEST dd_wrong_blocksize 00:37:39.295 ************************************ 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:39.295 ************************************ 00:37:39.295 START TEST dd_smaller_blocksize 00:37:39.295 ************************************ 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1123 -- # smaller_blocksize 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:39.295 14:30:25 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:37:39.295 [2024-07-15 14:30:25.288629] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:37:39.295 [2024-07-15 14:30:25.288908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225267 ] 00:37:39.553 [2024-07-15 14:30:25.444501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:39.811 [2024-07-15 14:30:25.699939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.380 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:37:40.380 [2024-07-15 14:30:26.215395] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:37:40.380 [2024-07-15 14:30:26.215655] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:40.947 [2024-07-15 14:30:26.906720] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:41.515 00:37:41.515 real 0m2.010s 00:37:41.515 user 0m1.518s 00:37:41.515 sys 0m0.380s 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:37:41.515 ************************************ 00:37:41.515 END TEST dd_smaller_blocksize 00:37:41.515 ************************************ 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:41.515 ************************************ 00:37:41.515 START TEST dd_invalid_count 00:37:41.515 ************************************ 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1123 -- # invalid_count 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:37:41.515 [2024-07-15 14:30:27.354142] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:41.515 00:37:41.515 real 0m0.073s 00:37:41.515 user 0m0.038s 00:37:41.515 sys 0m0.034s 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:37:41.515 ************************************ 00:37:41.515 END TEST dd_invalid_count 00:37:41.515 ************************************ 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 
00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:41.515 ************************************ 00:37:41.515 START TEST dd_invalid_oflag 00:37:41.515 ************************************ 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1123 -- # invalid_oflag 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:37:41.515 [2024-07-15 14:30:27.480482] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:41.515 00:37:41.515 real 0m0.071s 00:37:41.515 user 0m0.034s 00:37:41.515 sys 0m0.037s 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:41.515 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:37:41.515 ************************************ 00:37:41.515 END TEST dd_invalid_oflag 00:37:41.515 ************************************ 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- 
common/autotest_common.sh@1142 -- # return 0 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:41.774 ************************************ 00:37:41.774 START TEST dd_invalid_iflag 00:37:41.774 ************************************ 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1123 -- # invalid_iflag 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:37:41.774 [2024-07-15 14:30:27.599710] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:41.774 00:37:41.774 real 0m0.067s 00:37:41.774 user 0m0.036s 00:37:41.774 sys 0m0.031s 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:37:41.774 ************************************ 00:37:41.774 END TEST dd_invalid_iflag 00:37:41.774 ************************************ 
00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:41.774 ************************************ 00:37:41.774 START TEST dd_unknown_flag 00:37:41.774 ************************************ 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1123 -- # unknown_flag 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:41.774 14:30:27 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:37:41.774 [2024-07-15 14:30:27.734087] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 
00:37:41.774 [2024-07-15 14:30:27.734397] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225391 ] 00:37:42.032 [2024-07-15 14:30:27.912858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.291 [2024-07-15 14:30:28.109044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.550 [2024-07-15 14:30:28.389842] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:37:42.550 [2024-07-15 14:30:28.390121] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:42.550  Copying: 0/0 [B] (average 0 Bps)[2024-07-15 14:30:28.390375] app.c:1039:app_stop: *NOTICE*: spdk_app_stop called twice 00:37:43.117 [2024-07-15 14:30:29.055292] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:43.684 00:37:43.684 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:43.684 00:37:43.684 real 0m1.775s 00:37:43.684 user 0m1.416s 00:37:43.684 sys 0m0.237s 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:37:43.684 ************************************ 00:37:43.684 END TEST dd_unknown_flag 00:37:43.684 ************************************ 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:43.684 ************************************ 00:37:43.684 START TEST dd_invalid_json 00:37:43.684 ************************************ 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1123 -- # invalid_json 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:37:43.684 14:30:29 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:37:43.684 [2024-07-15 14:30:29.558300] Starting SPDK v24.09-pre git sha1 255871c19 / DPDK 24.03.0 initialization... 00:37:43.684 [2024-07-15 14:30:29.558551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid225439 ] 00:37:43.943 [2024-07-15 14:30:29.721718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:43.943 [2024-07-15 14:30:29.905982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:43.943 [2024-07-15 14:30:29.906259] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:37:43.943 [2024-07-15 14:30:29.906416] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:37:43.943 [2024-07-15 14:30:29.906505] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:37:43.943 [2024-07-15 14:30:29.906625] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:44.508 00:37:44.508 real 0m0.772s 00:37:44.508 user 0m0.543s 00:37:44.508 sys 0m0.125s 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:44.508 14:30:30 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:37:44.508 ************************************ 00:37:44.508 END TEST dd_invalid_json 00:37:44.508 ************************************ 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1142 -- # return 0 00:37:44.508 00:37:44.508 real 0m5.956s 00:37:44.508 user 0m4.085s 00:37:44.508 sys 0m1.485s 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:44.508 14:30:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:37:44.508 ************************************ 00:37:44.508 END TEST spdk_dd_negative 00:37:44.508 ************************************ 00:37:44.508 14:30:30 spdk_dd -- common/autotest_common.sh@1142 -- # return 0 00:37:44.508 00:37:44.508 real 2m25.406s 00:37:44.508 user 1m56.627s 00:37:44.508 sys 0m19.471s 00:37:44.508 14:30:30 spdk_dd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:44.508 14:30:30 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:37:44.508 ************************************ 00:37:44.508 END TEST spdk_dd 00:37:44.508 ************************************ 00:37:44.508 14:30:30 -- common/autotest_common.sh@1142 -- # return 0 00:37:44.508 14:30:30 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@260 -- # timing_exit lib 00:37:44.508 14:30:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:44.508 14:30:30 -- common/autotest_common.sh@10 -- # set +x 00:37:44.508 14:30:30 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:44.508 14:30:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:44.509 14:30:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:44.509 14:30:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:44.509 14:30:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:44.509 14:30:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:44.509 14:30:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:44.509 14:30:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:44.509 14:30:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:44.509 14:30:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:44.509 14:30:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:44.509 14:30:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:44.509 14:30:30 -- common/autotest_common.sh@10 -- # set +x 00:37:44.509 14:30:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:44.509 14:30:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:44.509 14:30:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:44.509 14:30:30 -- common/autotest_common.sh@10 -- # set +x 00:37:45.887 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:37:45.887 Waiting for 
block devices as requested 00:37:45.887 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:37:46.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda2,mount@vda:vda5, so not binding PCI dev 00:37:46.455 Cleaning 00:37:46.455 Removing: /var/run/dpdk/spdk0/config 00:37:46.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:46.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:46.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:46.455 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:46.455 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:46.455 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:46.455 Removing: /dev/shm/spdk_tgt_trace.pid177659 00:37:46.455 Removing: /var/run/dpdk/spdk0 00:37:46.455 Removing: /var/run/dpdk/spdk_pid177412 00:37:46.455 Removing: /var/run/dpdk/spdk_pid177659 00:37:46.455 Removing: /var/run/dpdk/spdk_pid177911 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178028 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178092 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178231 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178254 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178415 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178696 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178878 00:37:46.455 Removing: /var/run/dpdk/spdk_pid178995 00:37:46.455 Removing: /var/run/dpdk/spdk_pid179103 00:37:46.455 Removing: /var/run/dpdk/spdk_pid179242 00:37:46.455 Removing: /var/run/dpdk/spdk_pid179349 00:37:46.455 Removing: /var/run/dpdk/spdk_pid179402 00:37:46.455 Removing: /var/run/dpdk/spdk_pid179453 00:37:46.455 Removing: /var/run/dpdk/spdk_pid179525 00:37:46.455 Removing: /var/run/dpdk/spdk_pid179629 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180116 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180196 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180273 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180301 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180456 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180482 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180649 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180670 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180746 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180769 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180838 00:37:46.455 Removing: /var/run/dpdk/spdk_pid180865 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181072 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181115 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181163 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181250 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181340 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181385 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181486 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181537 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181595 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181651 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181709 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181772 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181823 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181881 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181937 00:37:46.455 Removing: /var/run/dpdk/spdk_pid181995 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182050 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182107 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182158 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182222 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182272 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182331 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182393 00:37:46.455 Removing: 
/var/run/dpdk/spdk_pid182447 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182513 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182564 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182622 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182714 00:37:46.455 Removing: /var/run/dpdk/spdk_pid182848 00:37:46.714 Removing: /var/run/dpdk/spdk_pid183038 00:37:46.714 Removing: /var/run/dpdk/spdk_pid183140 00:37:46.714 Removing: /var/run/dpdk/spdk_pid183202 00:37:46.714 Removing: /var/run/dpdk/spdk_pid184216 00:37:46.714 Removing: /var/run/dpdk/spdk_pid184437 00:37:46.714 Removing: /var/run/dpdk/spdk_pid184647 00:37:46.714 Removing: /var/run/dpdk/spdk_pid184770 00:37:46.714 Removing: /var/run/dpdk/spdk_pid184912 00:37:46.714 Removing: /var/run/dpdk/spdk_pid184993 00:37:46.714 Removing: /var/run/dpdk/spdk_pid185031 00:37:46.714 Removing: /var/run/dpdk/spdk_pid185069 00:37:46.714 Removing: /var/run/dpdk/spdk_pid185540 00:37:46.714 Removing: /var/run/dpdk/spdk_pid185640 00:37:46.714 Removing: /var/run/dpdk/spdk_pid185759 00:37:46.714 Removing: /var/run/dpdk/spdk_pid185822 00:37:46.714 Removing: /var/run/dpdk/spdk_pid187139 00:37:46.714 Removing: /var/run/dpdk/spdk_pid187532 00:37:46.714 Removing: /var/run/dpdk/spdk_pid187720 00:37:46.714 Removing: /var/run/dpdk/spdk_pid188698 00:37:46.714 Removing: /var/run/dpdk/spdk_pid189077 00:37:46.714 Removing: /var/run/dpdk/spdk_pid189279 00:37:46.714 Removing: /var/run/dpdk/spdk_pid190260 00:37:46.714 Removing: /var/run/dpdk/spdk_pid190815 00:37:46.714 Removing: /var/run/dpdk/spdk_pid191017 00:37:46.714 Removing: /var/run/dpdk/spdk_pid193238 00:37:46.714 Removing: /var/run/dpdk/spdk_pid193726 00:37:46.714 Removing: /var/run/dpdk/spdk_pid193937 00:37:46.714 Removing: /var/run/dpdk/spdk_pid196205 00:37:46.714 Removing: /var/run/dpdk/spdk_pid196728 00:37:46.714 Removing: /var/run/dpdk/spdk_pid196946 00:37:46.714 Removing: /var/run/dpdk/spdk_pid199163 00:37:46.714 Removing: /var/run/dpdk/spdk_pid199933 00:37:46.714 Removing: /var/run/dpdk/spdk_pid200138 00:37:46.714 Removing: /var/run/dpdk/spdk_pid202605 00:37:46.714 Removing: /var/run/dpdk/spdk_pid203165 00:37:46.714 Removing: /var/run/dpdk/spdk_pid203383 00:37:46.714 Removing: /var/run/dpdk/spdk_pid205842 00:37:46.714 Removing: /var/run/dpdk/spdk_pid206406 00:37:46.714 Removing: /var/run/dpdk/spdk_pid206627 00:37:46.714 Removing: /var/run/dpdk/spdk_pid209102 00:37:46.714 Removing: /var/run/dpdk/spdk_pid209977 00:37:46.714 Removing: /var/run/dpdk/spdk_pid210192 00:37:46.714 Removing: /var/run/dpdk/spdk_pid210412 00:37:46.714 Removing: /var/run/dpdk/spdk_pid210929 00:37:46.714 Removing: /var/run/dpdk/spdk_pid211850 00:37:46.714 Removing: /var/run/dpdk/spdk_pid212333 00:37:46.714 Removing: /var/run/dpdk/spdk_pid213217 00:37:46.714 Removing: /var/run/dpdk/spdk_pid213755 00:37:46.714 Removing: /var/run/dpdk/spdk_pid214694 00:37:46.714 Removing: /var/run/dpdk/spdk_pid215208 00:37:46.714 Removing: /var/run/dpdk/spdk_pid216517 00:37:46.714 Removing: /var/run/dpdk/spdk_pid217055 00:37:46.714 Removing: /var/run/dpdk/spdk_pid218328 00:37:46.714 Removing: /var/run/dpdk/spdk_pid218859 00:37:46.714 Removing: /var/run/dpdk/spdk_pid220128 00:37:46.714 Removing: /var/run/dpdk/spdk_pid220652 00:37:46.714 Removing: /var/run/dpdk/spdk_pid221504 00:37:46.714 Removing: /var/run/dpdk/spdk_pid221557 00:37:46.714 Removing: /var/run/dpdk/spdk_pid221608 00:37:46.714 Removing: /var/run/dpdk/spdk_pid221666 00:37:46.714 Removing: /var/run/dpdk/spdk_pid221807 00:37:46.714 Removing: /var/run/dpdk/spdk_pid221953 00:37:46.714 Removing: 
/var/run/dpdk/spdk_pid222179 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222439 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222462 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222507 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222538 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222570 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222598 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222629 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222658 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222689 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222716 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222738 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222776 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222797 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222825 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222856 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222884 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222912 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222944 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222971 00:37:46.714 Removing: /var/run/dpdk/spdk_pid222999 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223053 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223075 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223117 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223203 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223250 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223277 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223316 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223348 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223371 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223428 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223454 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223497 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223525 00:37:46.714 Removing: /var/run/dpdk/spdk_pid223546 00:37:46.972 Removing: /var/run/dpdk/spdk_pid223571 00:37:46.972 Removing: /var/run/dpdk/spdk_pid223595 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223612 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223640 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223665 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223710 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223751 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223783 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223829 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223850 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223872 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223936 00:37:46.973 Removing: /var/run/dpdk/spdk_pid223962 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224004 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224031 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224055 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224072 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224100 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224124 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224142 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224166 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224257 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224337 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224469 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224499 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224558 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224617 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224649 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224682 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224712 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224756 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224788 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224875 00:37:46.973 Removing: 
/var/run/dpdk/spdk_pid224941 00:37:46.973 Removing: /var/run/dpdk/spdk_pid224993 00:37:46.973 Removing: /var/run/dpdk/spdk_pid225267 00:37:46.973 Removing: /var/run/dpdk/spdk_pid225391 00:37:46.973 Removing: /var/run/dpdk/spdk_pid225439 00:37:46.973 Clean 00:37:46.973 14:30:32 -- common/autotest_common.sh@1451 -- # return 0 00:37:46.973 14:30:32 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:46.973 14:30:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:46.973 14:30:32 -- common/autotest_common.sh@10 -- # set +x 00:37:46.973 14:30:32 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:46.973 14:30:32 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:46.973 14:30:32 -- common/autotest_common.sh@10 -- # set +x 00:37:46.973 14:30:32 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:46.973 14:30:32 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:46.973 14:30:32 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:46.973 14:30:32 -- spdk/autotest.sh@391 -- # hash lcov 00:37:46.973 14:30:32 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:46.973 14:30:32 -- spdk/autotest.sh@393 -- # hostname 00:37:47.231 14:30:32 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t rocky9-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:47.231 geninfo: WARNING: invalid characters removed from testname! 00:38:43.527 14:31:20 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:43.527 14:31:25 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:43.527 14:31:28 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:46.813 14:31:32 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:49.344 14:31:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info 
'*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:51.889 14:31:37 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:55.169 14:31:40 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:55.169 14:31:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:55.169 14:31:40 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:55.169 14:31:40 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:55.169 14:31:40 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:55.169 14:31:40 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:38:55.169 14:31:40 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:38:55.169 14:31:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:38:55.169 14:31:40 -- paths/export.sh@5 -- $ export PATH 00:38:55.169 14:31:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/sbin:/bin:/usr/sbin:/usr/bin 00:38:55.169 14:31:40 -- common/autobuild_common.sh@472 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:38:55.169 14:31:40 -- common/autobuild_common.sh@473 -- $ date +%s 00:38:55.169 14:31:40 -- common/autobuild_common.sh@473 -- $ mktemp -dt spdk_1721053900.XXXXXX 00:38:55.169 14:31:40 -- common/autobuild_common.sh@473 -- $ SPDK_WORKSPACE=/tmp/spdk_1721053900.JbvAwd 00:38:55.169 14:31:40 -- common/autobuild_common.sh@475 -- $ [[ -n '' ]] 00:38:55.169 14:31:40 -- common/autobuild_common.sh@479 -- $ '[' -n '' ']' 00:38:55.169 14:31:40 -- common/autobuild_common.sh@482 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:38:55.169 14:31:40 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:38:55.169 14:31:40 -- common/autobuild_common.sh@488 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:38:55.169 14:31:40 -- 
common/autobuild_common.sh@489 -- $ get_config_params 00:38:55.169 14:31:40 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:55.169 14:31:40 -- common/autotest_common.sh@10 -- $ set +x 00:38:55.169 14:31:41 -- common/autobuild_common.sh@489 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-asan --enable-coverage' 00:38:55.169 14:31:41 -- common/autobuild_common.sh@491 -- $ start_monitor_resources 00:38:55.169 14:31:41 -- pm/common@17 -- $ local monitor 00:38:55.170 14:31:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:55.170 14:31:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:55.170 14:31:41 -- pm/common@25 -- $ sleep 1 00:38:55.170 14:31:41 -- pm/common@21 -- $ date +%s 00:38:55.170 14:31:41 -- pm/common@21 -- $ date +%s 00:38:55.170 14:31:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721053901 00:38:55.170 14:31:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721053901 00:38:55.170 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721053901_collect-vmstat.pm.log 00:38:55.170 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721053901_collect-cpu-load.pm.log 00:38:56.106 14:31:42 -- common/autobuild_common.sh@492 -- $ trap stop_monitor_resources EXIT 00:38:56.106 14:31:42 -- spdk/autopackage.sh@10 -- $ [[ 1 -eq 1 ]] 00:38:56.106 14:31:42 -- spdk/autopackage.sh@11 -- $ build_release 00:38:56.106 14:31:42 -- common/autobuild_common.sh@469 -- $ run_test build_release _build_release 00:38:56.106 14:31:42 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:38:56.106 14:31:42 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:38:56.106 14:31:42 -- common/autotest_common.sh@10 -- $ set +x 00:38:56.106 ************************************ 00:38:56.106 START TEST build_release 00:38:56.106 ************************************ 00:38:56.106 14:31:42 build_release -- common/autotest_common.sh@1123 -- $ _build_release 00:38:56.106 14:31:42 build_release -- common/autobuild_common.sh@444 -- $ local jobs LD 00:38:56.106 14:31:42 build_release -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:56.106 14:31:42 build_release -- common/autobuild_common.sh@450 -- $ [[ '' == *clang* ]] 00:38:56.106 14:31:42 build_release -- common/autobuild_common.sh@460 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-asan --enable-coverage --disable-debug --disable-unit-tests --enable-lto 00:38:56.366 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:56.366 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:38:56.625 Using 'verbs' RDMA provider 00:39:09.413 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:39:21.611 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:39:21.611 Creating mk/config.mk...done. 00:39:21.611 Creating mk/cc.flags.mk...done. 00:39:21.611 Type 'make' to build. 
00:39:21.611 14:32:06 build_release -- common/autobuild_common.sh@465 -- $ make -C /home/vagrant/spdk_repo/spdk -j10 00:39:21.611 make: Entering directory '/home/vagrant/spdk_repo/spdk' 00:39:21.611 make[1]: Nothing to be done for 'all'. 00:39:26.882 The Meson build system 00:39:26.882 Version: 1.4.0 00:39:26.882 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:39:26.882 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:39:26.882 Build type: native build 00:39:26.882 Program cat found: YES (/bin/cat) 00:39:26.882 Project name: DPDK 00:39:26.882 Project version: 24.03.0 00:39:26.882 C compiler for the host machine: cc (gcc 11.4.1 "cc (GCC) 11.4.1 20230605 (Red Hat 11.4.1-2)") 00:39:26.882 C linker for the host machine: cc ld.bfd 2.35.2-42 00:39:26.882 Host machine cpu family: x86_64 00:39:26.882 Host machine cpu: x86_64 00:39:26.882 Message: ## Building in Developer Mode ## 00:39:26.882 Program pkg-config found: YES (/bin/pkg-config) 00:39:26.882 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:39:26.882 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:39:26.882 Program python3 found: YES (/usr/bin/python3) 00:39:26.882 Program cat found: YES (/bin/cat) 00:39:26.882 Compiler for C supports arguments -march=native: YES 00:39:26.882 Checking for size of "void *" : 8 00:39:26.882 Checking for size of "void *" : 8 (cached) 00:39:26.882 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:39:26.882 Library m found: YES 00:39:26.882 Library numa found: YES 00:39:26.882 Has header "numaif.h" : YES 00:39:26.882 Library fdt found: NO 00:39:26.882 Library execinfo found: NO 00:39:26.882 Has header "execinfo.h" : YES 00:39:26.882 Found pkg-config: YES (/bin/pkg-config) 1.7.3 00:39:26.882 Run-time dependency libarchive found: NO (tried pkgconfig) 00:39:26.882 Run-time dependency libbsd found: NO (tried pkgconfig) 00:39:26.882 Run-time dependency jansson found: NO (tried pkgconfig) 00:39:26.882 Run-time dependency openssl found: YES 3.0.7 00:39:26.882 Run-time dependency libpcap found: NO (tried pkgconfig) 00:39:26.882 Library pcap found: NO 00:39:26.882 Compiler for C supports arguments -Wcast-qual: YES 00:39:26.882 Compiler for C supports arguments -Wdeprecated: YES 00:39:26.882 Compiler for C supports arguments -Wformat: YES 00:39:26.882 Compiler for C supports arguments -Wformat-nonliteral: NO 00:39:26.882 Compiler for C supports arguments -Wformat-security: NO 00:39:26.882 Compiler for C supports arguments -Wmissing-declarations: YES 00:39:26.882 Compiler for C supports arguments -Wmissing-prototypes: YES 00:39:26.882 Compiler for C supports arguments -Wnested-externs: YES 00:39:26.882 Compiler for C supports arguments -Wold-style-definition: YES 00:39:26.882 Compiler for C supports arguments -Wpointer-arith: YES 00:39:26.882 Compiler for C supports arguments -Wsign-compare: YES 00:39:26.882 Compiler for C supports arguments -Wstrict-prototypes: YES 00:39:26.882 Compiler for C supports arguments -Wundef: YES 00:39:26.882 Compiler for C supports arguments -Wwrite-strings: YES 00:39:26.882 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:39:26.882 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:39:26.882 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:39:26.882 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:39:26.882 Program objdump found: YES (/bin/objdump) 
00:39:26.882 Compiler for C supports arguments -mavx512f: YES 00:39:26.882 Checking if "AVX512 checking" compiles: YES 00:39:26.882 Fetching value of define "__SSE4_2__" : 1 00:39:26.882 Fetching value of define "__AES__" : 1 00:39:26.882 Fetching value of define "__AVX__" : 1 00:39:26.882 Fetching value of define "__AVX2__" : 1 00:39:26.882 Fetching value of define "__AVX512BW__" : (undefined) 00:39:26.882 Fetching value of define "__AVX512CD__" : (undefined) 00:39:26.882 Fetching value of define "__AVX512DQ__" : (undefined) 00:39:26.882 Fetching value of define "__AVX512F__" : (undefined) 00:39:26.882 Fetching value of define "__AVX512VL__" : (undefined) 00:39:26.882 Fetching value of define "__PCLMUL__" : 1 00:39:26.882 Fetching value of define "__RDRND__" : 1 00:39:26.882 Fetching value of define "__RDSEED__" : 1 00:39:26.882 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:39:26.882 Fetching value of define "__znver1__" : (undefined) 00:39:26.882 Fetching value of define "__znver2__" : (undefined) 00:39:26.882 Fetching value of define "__znver3__" : (undefined) 00:39:26.882 Fetching value of define "__znver4__" : (undefined) 00:39:26.882 Compiler for C supports arguments -ffat-lto-objects: YES 00:39:26.882 Library asan found: YES 00:39:26.882 Compiler for C supports arguments -Wno-format-truncation: YES 00:39:26.882 Message: lib/log: Defining dependency "log" 00:39:26.882 Message: lib/kvargs: Defining dependency "kvargs" 00:39:26.882 Message: lib/telemetry: Defining dependency "telemetry" 00:39:26.882 Library rt found: YES 00:39:26.882 Checking for function "getentropy" : NO 00:39:26.882 Message: lib/eal: Defining dependency "eal" 00:39:26.882 Message: lib/ring: Defining dependency "ring" 00:39:26.882 Message: lib/rcu: Defining dependency "rcu" 00:39:26.882 Message: lib/mempool: Defining dependency "mempool" 00:39:26.882 Message: lib/mbuf: Defining dependency "mbuf" 00:39:26.882 Fetching value of define "__PCLMUL__" : 1 (cached) 00:39:26.882 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:39:26.882 Compiler for C supports arguments -mpclmul: YES 00:39:26.882 Compiler for C supports arguments -maes: YES 00:39:26.882 Compiler for C supports arguments -mavx512f: YES (cached) 00:39:26.882 Compiler for C supports arguments -mavx512bw: YES 00:39:26.882 Compiler for C supports arguments -mavx512dq: YES 00:39:26.882 Compiler for C supports arguments -mavx512vl: YES 00:39:26.882 Compiler for C supports arguments -mvpclmulqdq: YES 00:39:26.882 Compiler for C supports arguments -mavx2: YES 00:39:26.882 Compiler for C supports arguments -mavx: YES 00:39:26.882 Message: lib/net: Defining dependency "net" 00:39:26.882 Message: lib/meter: Defining dependency "meter" 00:39:26.882 Message: lib/ethdev: Defining dependency "ethdev" 00:39:26.882 Message: lib/pci: Defining dependency "pci" 00:39:26.882 Message: lib/cmdline: Defining dependency "cmdline" 00:39:26.882 Message: lib/hash: Defining dependency "hash" 00:39:26.882 Message: lib/timer: Defining dependency "timer" 00:39:26.882 Message: lib/compressdev: Defining dependency "compressdev" 00:39:26.882 Message: lib/cryptodev: Defining dependency "cryptodev" 00:39:26.882 Message: lib/dmadev: Defining dependency "dmadev" 00:39:26.882 Compiler for C supports arguments -Wno-cast-qual: YES 00:39:26.882 Message: lib/power: Defining dependency "power" 00:39:26.882 Message: lib/reorder: Defining dependency "reorder" 00:39:26.882 Message: lib/security: Defining dependency "security" 00:39:26.882 Has header "linux/userfaultfd.h" : YES 
00:39:26.882 Has header "linux/vduse.h" : NO 00:39:26.882 Message: lib/vhost: Defining dependency "vhost" 00:39:26.882 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:39:26.882 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:39:26.882 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:39:26.882 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:39:26.882 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:39:26.882 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:39:26.882 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:39:26.882 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:39:26.882 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:39:26.882 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:39:26.882 Program doxygen found: YES (/bin/doxygen) 00:39:26.882 Configuring doxy-api-html.conf using configuration 00:39:26.882 Configuring doxy-api-man.conf using configuration 00:39:26.882 Program mandb found: YES (/bin/mandb) 00:39:26.882 Program sphinx-build found: NO 00:39:26.883 Configuring rte_build_config.h using configuration 00:39:26.883 Message: 00:39:26.883 ================= 00:39:26.883 Applications Enabled 00:39:26.883 ================= 00:39:26.883 00:39:26.883 apps: 00:39:26.883 00:39:26.883 00:39:26.883 Message: 00:39:26.883 ================= 00:39:26.883 Libraries Enabled 00:39:26.883 ================= 00:39:26.883 00:39:26.883 libs: 00:39:26.883 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:39:26.883 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:39:26.883 cryptodev, dmadev, power, reorder, security, vhost, 00:39:26.883 00:39:26.883 Message: 00:39:26.883 =============== 00:39:26.883 Drivers Enabled 00:39:26.883 =============== 00:39:26.883 00:39:26.883 common: 00:39:26.883 00:39:26.883 bus: 00:39:26.883 pci, vdev, 00:39:26.883 mempool: 00:39:26.883 ring, 00:39:26.883 dma: 00:39:26.883 00:39:26.883 net: 00:39:26.883 00:39:26.883 crypto: 00:39:26.883 00:39:26.883 compress: 00:39:26.883 00:39:26.883 vdpa: 00:39:26.883 00:39:26.883 00:39:26.883 Message: 00:39:26.883 ================= 00:39:26.883 Content Skipped 00:39:26.883 ================= 00:39:26.883 00:39:26.883 apps: 00:39:26.883 dumpcap: explicitly disabled via build config 00:39:26.883 graph: explicitly disabled via build config 00:39:26.883 pdump: explicitly disabled via build config 00:39:26.883 proc-info: explicitly disabled via build config 00:39:26.883 test-acl: explicitly disabled via build config 00:39:26.883 test-bbdev: explicitly disabled via build config 00:39:26.883 test-cmdline: explicitly disabled via build config 00:39:26.883 test-compress-perf: explicitly disabled via build config 00:39:26.883 test-crypto-perf: explicitly disabled via build config 00:39:26.883 test-dma-perf: explicitly disabled via build config 00:39:26.883 test-eventdev: explicitly disabled via build config 00:39:26.883 test-fib: explicitly disabled via build config 00:39:26.883 test-flow-perf: explicitly disabled via build config 00:39:26.883 test-gpudev: explicitly disabled via build config 00:39:26.883 test-mldev: explicitly disabled via build config 00:39:26.883 test-pipeline: explicitly disabled via build config 00:39:26.883 test-pmd: explicitly disabled via build config 00:39:26.883 test-regex: explicitly disabled via build config 00:39:26.883 test-sad: explicitly disabled via build config 
00:39:26.883 test-security-perf: explicitly disabled via build config 00:39:26.883 00:39:26.883 libs: 00:39:26.883 argparse: explicitly disabled via build config 00:39:26.883 metrics: explicitly disabled via build config 00:39:26.883 acl: explicitly disabled via build config 00:39:26.883 bbdev: explicitly disabled via build config 00:39:26.883 bitratestats: explicitly disabled via build config 00:39:26.883 bpf: explicitly disabled via build config 00:39:26.883 cfgfile: explicitly disabled via build config 00:39:26.883 distributor: explicitly disabled via build config 00:39:26.883 efd: explicitly disabled via build config 00:39:26.883 eventdev: explicitly disabled via build config 00:39:26.883 dispatcher: explicitly disabled via build config 00:39:26.883 gpudev: explicitly disabled via build config 00:39:26.883 gro: explicitly disabled via build config 00:39:26.883 gso: explicitly disabled via build config 00:39:26.883 ip_frag: explicitly disabled via build config 00:39:26.883 jobstats: explicitly disabled via build config 00:39:26.883 latencystats: explicitly disabled via build config 00:39:26.883 lpm: explicitly disabled via build config 00:39:26.883 member: explicitly disabled via build config 00:39:26.883 pcapng: explicitly disabled via build config 00:39:26.883 rawdev: explicitly disabled via build config 00:39:26.883 regexdev: explicitly disabled via build config 00:39:26.883 mldev: explicitly disabled via build config 00:39:26.883 rib: explicitly disabled via build config 00:39:26.883 sched: explicitly disabled via build config 00:39:26.883 stack: explicitly disabled via build config 00:39:26.883 ipsec: explicitly disabled via build config 00:39:26.883 pdcp: explicitly disabled via build config 00:39:26.883 fib: explicitly disabled via build config 00:39:26.883 port: explicitly disabled via build config 00:39:26.883 pdump: explicitly disabled via build config 00:39:26.883 table: explicitly disabled via build config 00:39:26.883 pipeline: explicitly disabled via build config 00:39:26.883 graph: explicitly disabled via build config 00:39:26.883 node: explicitly disabled via build config 00:39:26.883 00:39:26.883 drivers: 00:39:26.883 common/cpt: not in enabled drivers build config 00:39:26.883 common/dpaax: not in enabled drivers build config 00:39:26.883 common/iavf: not in enabled drivers build config 00:39:26.883 common/idpf: not in enabled drivers build config 00:39:26.883 common/ionic: not in enabled drivers build config 00:39:26.883 common/mvep: not in enabled drivers build config 00:39:26.883 common/octeontx: not in enabled drivers build config 00:39:26.883 bus/auxiliary: not in enabled drivers build config 00:39:26.883 bus/cdx: not in enabled drivers build config 00:39:26.883 bus/dpaa: not in enabled drivers build config 00:39:26.883 bus/fslmc: not in enabled drivers build config 00:39:26.883 bus/ifpga: not in enabled drivers build config 00:39:26.883 bus/platform: not in enabled drivers build config 00:39:26.883 bus/uacce: not in enabled drivers build config 00:39:26.883 bus/vmbus: not in enabled drivers build config 00:39:26.883 common/cnxk: not in enabled drivers build config 00:39:26.883 common/mlx5: not in enabled drivers build config 00:39:26.883 common/nfp: not in enabled drivers build config 00:39:26.883 common/nitrox: not in enabled drivers build config 00:39:26.883 common/qat: not in enabled drivers build config 00:39:26.883 common/sfc_efx: not in enabled drivers build config 00:39:26.883 mempool/bucket: not in enabled drivers build config 00:39:26.883 mempool/cnxk: 
not in enabled drivers build config 00:39:26.883 mempool/dpaa: not in enabled drivers build config 00:39:26.883 mempool/dpaa2: not in enabled drivers build config 00:39:26.883 mempool/octeontx: not in enabled drivers build config 00:39:26.883 mempool/stack: not in enabled drivers build config 00:39:26.883 dma/cnxk: not in enabled drivers build config 00:39:26.883 dma/dpaa: not in enabled drivers build config 00:39:26.883 dma/dpaa2: not in enabled drivers build config 00:39:26.883 dma/hisilicon: not in enabled drivers build config 00:39:26.883 dma/idxd: not in enabled drivers build config 00:39:26.883 dma/ioat: not in enabled drivers build config 00:39:26.883 dma/skeleton: not in enabled drivers build config 00:39:26.883 net/af_packet: not in enabled drivers build config 00:39:26.883 net/af_xdp: not in enabled drivers build config 00:39:26.883 net/ark: not in enabled drivers build config 00:39:26.883 net/atlantic: not in enabled drivers build config 00:39:26.883 net/avp: not in enabled drivers build config 00:39:26.883 net/axgbe: not in enabled drivers build config 00:39:26.883 net/bnx2x: not in enabled drivers build config 00:39:26.883 net/bnxt: not in enabled drivers build config 00:39:26.883 net/bonding: not in enabled drivers build config 00:39:26.883 net/cnxk: not in enabled drivers build config 00:39:26.883 net/cpfl: not in enabled drivers build config 00:39:26.883 net/cxgbe: not in enabled drivers build config 00:39:26.883 net/dpaa: not in enabled drivers build config 00:39:26.883 net/dpaa2: not in enabled drivers build config 00:39:26.883 net/e1000: not in enabled drivers build config 00:39:26.883 net/ena: not in enabled drivers build config 00:39:26.883 net/enetc: not in enabled drivers build config 00:39:26.883 net/enetfec: not in enabled drivers build config 00:39:26.883 net/enic: not in enabled drivers build config 00:39:26.883 net/failsafe: not in enabled drivers build config 00:39:26.883 net/fm10k: not in enabled drivers build config 00:39:26.883 net/gve: not in enabled drivers build config 00:39:26.883 net/hinic: not in enabled drivers build config 00:39:26.883 net/hns3: not in enabled drivers build config 00:39:26.883 net/i40e: not in enabled drivers build config 00:39:26.883 net/iavf: not in enabled drivers build config 00:39:26.883 net/ice: not in enabled drivers build config 00:39:26.883 net/idpf: not in enabled drivers build config 00:39:26.883 net/igc: not in enabled drivers build config 00:39:26.883 net/ionic: not in enabled drivers build config 00:39:26.883 net/ipn3ke: not in enabled drivers build config 00:39:26.883 net/ixgbe: not in enabled drivers build config 00:39:26.883 net/mana: not in enabled drivers build config 00:39:26.883 net/memif: not in enabled drivers build config 00:39:26.883 net/mlx4: not in enabled drivers build config 00:39:26.883 net/mlx5: not in enabled drivers build config 00:39:26.883 net/mvneta: not in enabled drivers build config 00:39:26.883 net/mvpp2: not in enabled drivers build config 00:39:26.883 net/netvsc: not in enabled drivers build config 00:39:26.883 net/nfb: not in enabled drivers build config 00:39:26.883 net/nfp: not in enabled drivers build config 00:39:26.883 net/ngbe: not in enabled drivers build config 00:39:26.883 net/null: not in enabled drivers build config 00:39:26.883 net/octeontx: not in enabled drivers build config 00:39:26.883 net/octeon_ep: not in enabled drivers build config 00:39:26.883 net/pcap: not in enabled drivers build config 00:39:26.883 net/pfe: not in enabled drivers build config 00:39:26.883 net/qede: 
not in enabled drivers build config 00:39:26.883 net/ring: not in enabled drivers build config 00:39:26.883 net/sfc: not in enabled drivers build config 00:39:26.883 net/softnic: not in enabled drivers build config 00:39:26.883 net/tap: not in enabled drivers build config 00:39:26.883 net/thunderx: not in enabled drivers build config 00:39:26.883 net/txgbe: not in enabled drivers build config 00:39:26.883 net/vdev_netvsc: not in enabled drivers build config 00:39:26.883 net/vhost: not in enabled drivers build config 00:39:26.883 net/virtio: not in enabled drivers build config 00:39:26.883 net/vmxnet3: not in enabled drivers build config 00:39:26.883 raw/*: missing internal dependency, "rawdev" 00:39:26.883 crypto/armv8: not in enabled drivers build config 00:39:26.883 crypto/bcmfs: not in enabled drivers build config 00:39:26.883 crypto/caam_jr: not in enabled drivers build config 00:39:26.883 crypto/ccp: not in enabled drivers build config 00:39:26.883 crypto/cnxk: not in enabled drivers build config 00:39:26.883 crypto/dpaa_sec: not in enabled drivers build config 00:39:26.883 crypto/dpaa2_sec: not in enabled drivers build config 00:39:26.883 crypto/ipsec_mb: not in enabled drivers build config 00:39:26.883 crypto/mlx5: not in enabled drivers build config 00:39:26.883 crypto/mvsam: not in enabled drivers build config 00:39:26.883 crypto/nitrox: not in enabled drivers build config 00:39:26.883 crypto/null: not in enabled drivers build config 00:39:26.883 crypto/octeontx: not in enabled drivers build config 00:39:26.884 crypto/openssl: not in enabled drivers build config 00:39:26.884 crypto/scheduler: not in enabled drivers build config 00:39:26.884 crypto/uadk: not in enabled drivers build config 00:39:26.884 crypto/virtio: not in enabled drivers build config 00:39:26.884 compress/isal: not in enabled drivers build config 00:39:26.884 compress/mlx5: not in enabled drivers build config 00:39:26.884 compress/nitrox: not in enabled drivers build config 00:39:26.884 compress/octeontx: not in enabled drivers build config 00:39:26.884 compress/zlib: not in enabled drivers build config 00:39:26.884 regex/*: missing internal dependency, "regexdev" 00:39:26.884 ml/*: missing internal dependency, "mldev" 00:39:26.884 vdpa/ifc: not in enabled drivers build config 00:39:26.884 vdpa/mlx5: not in enabled drivers build config 00:39:26.884 vdpa/nfp: not in enabled drivers build config 00:39:26.884 vdpa/sfc: not in enabled drivers build config 00:39:26.884 event/*: missing internal dependency, "eventdev" 00:39:26.884 baseband/*: missing internal dependency, "bbdev" 00:39:26.884 gpu/*: missing internal dependency, "gpudev" 00:39:26.884 00:39:26.884 00:39:27.450 Build targets in project: 85 00:39:27.450 00:39:27.450 DPDK 24.03.0 00:39:27.450 00:39:27.450 User defined options 00:39:27.450 default_library : static 00:39:27.450 libdir : lib 00:39:27.450 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:39:27.450 b_lto : true 00:39:27.450 b_sanitize : address 00:39:27.450 c_args : -Wno-stringop-overflow -fcommon -fPIC -Werror 00:39:27.450 c_link_args : -Wno-stringop-overflow -fcommon 00:39:27.450 cpu_instruction_set: native 00:39:27.450 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:39:27.450 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:39:27.450 enable_docs : false 00:39:27.450 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:39:27.450 enable_kmods : false 00:39:27.450 max_lcores : 128 00:39:27.450 tests : false 00:39:27.450 00:39:27.450 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:39:28.016 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:39:28.016 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:39:28.016 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:39:28.016 [3/267] Linking static target lib/librte_kvargs.a 00:39:28.016 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:39:28.016 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:39:28.016 [6/267] Linking static target lib/librte_log.a 00:39:28.274 [7/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:39:28.274 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:39:28.274 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:39:28.274 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:39:28.274 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:39:28.274 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:39:28.275 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:39:28.533 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:39:28.533 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:39:28.533 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:39:28.791 [17/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:39:28.791 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:39:28.791 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:39:29.050 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:39:29.050 [21/267] Linking target lib/librte_log.so.24.1 00:39:29.050 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:39:29.050 [23/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:39:29.050 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:39:29.050 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:39:29.050 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:39:29.311 [27/267] Linking target lib/librte_kvargs.so.24.1 00:39:29.311 [28/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:39:29.311 [29/267] Linking static target lib/librte_telemetry.a 00:39:29.311 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:39:29.311 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:39:29.311 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:39:29.311 [33/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:39:29.580 [34/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:39:29.580 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:39:29.580 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:39:29.580 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:39:29.580 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:39:29.581 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:39:29.581 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:39:29.581 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:39:29.839 [42/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:39:29.839 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:39:30.096 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:39:30.096 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:39:30.096 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:39:30.355 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:39:30.355 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:39:30.355 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:39:30.355 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:39:30.355 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:39:30.613 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:39:30.613 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:39:30.613 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:39:30.613 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:39:30.613 [56/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:39:30.871 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:39:30.871 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:39:30.871 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:39:30.871 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:39:30.871 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:39:30.872 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:39:30.872 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:39:31.131 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:39:31.131 [65/267] Linking target lib/librte_telemetry.so.24.1 00:39:31.131 [66/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:39:31.131 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:39:31.389 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:39:31.389 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:39:31.389 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:39:31.389 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:39:31.648 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:39:31.648 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 
00:39:31.648 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:39:31.648 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:39:31.648 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:39:31.648 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:39:31.906 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:39:31.906 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:39:31.906 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:39:31.906 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:39:32.164 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:39:32.164 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:39:32.164 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:39:32.164 [85/267] Linking static target lib/librte_ring.a 00:39:32.423 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:39:32.423 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:39:32.423 [88/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:39:32.423 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:39:32.423 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:39:32.681 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:39:32.681 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:39:32.681 [93/267] Linking static target lib/librte_eal.a 00:39:32.681 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:39:32.681 [95/267] Linking static target lib/librte_mempool.a 00:39:32.938 [96/267] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:39:32.938 [97/267] Linking static target lib/net/libnet_crc_avx512_lib.a 00:39:32.938 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:39:32.938 [99/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:39:32.938 [100/267] Linking static target lib/librte_rcu.a 00:39:32.938 [101/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:39:33.196 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:39:33.196 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:39:33.196 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:39:33.196 [105/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:39:33.454 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:39:33.454 [107/267] Linking static target lib/librte_net.a 00:39:33.454 [108/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:39:33.454 [109/267] Linking static target lib/librte_meter.a 00:39:33.455 [110/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:39:33.455 [111/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:39:33.713 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:39:33.713 [113/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:39:33.713 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:39:33.713 [115/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:39:33.992 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:39:34.251 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:39:34.251 [118/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:39:34.251 [119/267] Linking static target lib/librte_mbuf.a 00:39:34.511 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:39:34.769 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:39:34.769 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:39:34.769 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:39:35.027 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:39:35.027 [125/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:39:35.027 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:39:35.027 [127/267] Linking static target lib/librte_pci.a 00:39:35.027 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:39:35.027 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:39:35.285 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:39:35.285 [131/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:39:35.285 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:39:35.285 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:39:35.285 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:39:35.544 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:39:35.544 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:39:35.544 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:39:35.544 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:39:35.544 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:39:35.544 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:39:35.544 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:39:35.544 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:39:35.544 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:39:35.803 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:39:35.803 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:39:35.803 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:39:35.803 [147/267] Linking static target lib/librte_cmdline.a 00:39:36.062 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:39:36.062 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:39:36.630 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:39:36.630 [151/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:39:36.630 [152/267] Linking static target lib/librte_timer.a 00:39:36.630 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:39:36.630 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:39:36.630 [155/267] Linking static target 
lib/librte_compressdev.a 00:39:36.630 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:39:36.630 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:39:36.889 [158/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:39:36.889 [159/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:39:36.889 [160/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:39:37.148 [161/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:39:37.148 [162/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:39:37.148 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:39:37.148 [164/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:39:37.406 [165/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:39:37.406 [166/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:39:37.664 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:39:37.664 [168/267] Linking static target lib/librte_dmadev.a 00:39:37.923 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:39:37.923 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:39:37.923 [171/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:39:38.182 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:39:38.182 [173/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:39:38.441 [174/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:39:38.441 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:39:38.441 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:39:38.441 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:39:38.699 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:39:38.699 [179/267] Linking static target lib/librte_power.a 00:39:38.958 [180/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:39:38.958 [181/267] Linking static target lib/librte_reorder.a 00:39:38.958 [182/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:39:38.958 [183/267] Linking static target lib/librte_security.a 00:39:39.216 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:39:39.216 [185/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:39:39.216 [186/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:39:39.216 [187/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:39:39.474 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:39:39.474 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:39:39.474 [190/267] Linking static target lib/librte_ethdev.a 00:39:39.474 [191/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:39:39.731 [192/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:39:39.731 [193/267] Linking static target lib/librte_cryptodev.a 00:39:39.989 [194/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:39:39.989 [195/267] Linking 
static target lib/librte_hash.a 00:39:40.261 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:39:40.261 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:39:40.261 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:39:40.542 [199/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:39:40.542 [200/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:39:40.800 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:39:41.058 [202/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:39:41.058 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:39:41.058 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:39:41.058 [205/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:39:41.316 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:39:41.574 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:39:41.574 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:39:41.832 [209/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:39:41.832 [210/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:39:41.832 [211/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:39:41.832 [212/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:39:41.832 [213/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:39:41.832 [214/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:39:41.832 [215/267] Linking static target drivers/librte_bus_vdev.a 00:39:42.089 [216/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:39:42.089 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:39:42.089 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:39:42.089 [219/267] Linking static target drivers/librte_bus_pci.a 00:39:42.089 [220/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:39:42.089 [221/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:39:42.089 [222/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:39:42.347 [223/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:39:42.347 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:39:42.347 [225/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:39:42.347 [226/267] Linking static target drivers/librte_mempool_ring.a 00:39:42.605 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:39:45.892 [228/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:39:52.450 [229/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:39:52.450 [230/267] Linking target lib/librte_eal.so.24.1 00:39:52.450 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs 00:39:52.450 [231/267] Generating symbol file 
lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:39:52.450 [232/267] Linking target lib/librte_meter.so.24.1 00:39:52.450 [233/267] Linking target lib/librte_pci.so.24.1 00:39:52.450 [234/267] Linking target lib/librte_ring.so.24.1 00:39:52.450 [235/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:39:52.450 [236/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:39:52.450 [237/267] Linking target drivers/librte_bus_vdev.so.24.1 00:39:52.450 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:39:52.707 [239/267] Linking target lib/librte_timer.so.24.1 00:39:52.707 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:39:52.707 [241/267] Linking target lib/librte_dmadev.so.24.1 00:39:52.965 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:39:53.531 [243/267] Linking target lib/librte_rcu.so.24.1 00:39:53.531 [244/267] Linking target lib/librte_mempool.so.24.1 00:39:53.531 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:39:53.531 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:39:54.098 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:39:54.098 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:39:55.485 [249/267] Linking target lib/librte_mbuf.so.24.1 00:39:55.744 [250/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:39:56.311 [251/267] Linking target lib/librte_reorder.so.24.1 00:39:56.311 [252/267] Linking target lib/librte_compressdev.so.24.1 00:39:56.879 [253/267] Linking target lib/librte_net.so.24.1 00:39:56.879 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:39:58.254 [255/267] Linking target lib/librte_cmdline.so.24.1 00:39:58.254 [256/267] Linking target lib/librte_cryptodev.so.24.1 00:39:58.511 [257/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:39:59.076 [258/267] Linking target lib/librte_security.so.24.1 00:40:01.607 [259/267] Linking target lib/librte_hash.so.24.1 00:40:01.607 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:40:09.757 [261/267] Linking target lib/librte_ethdev.so.24.1 00:40:09.757 lto-wrapper: warning: using serial compilation of 5 LTRANS jobs 00:40:09.757 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:40:12.328 [263/267] Linking target lib/librte_power.so.24.1 00:40:15.619 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:40:15.619 [265/267] Linking static target lib/librte_vhost.a 00:40:17.520 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:41:13.755 [267/267] Linking target lib/librte_vhost.so.24.1 00:41:13.755 lto-wrapper: warning: using serial compilation of 7 LTRANS jobs 00:41:13.755 INFO: autodetecting backend as ninja 00:41:13.755 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:41:13.755 CC lib/ut_mock/mock.o 00:41:13.755 CC lib/ut/ut.o 00:41:13.755 CC lib/log/log.o 00:41:13.755 CC lib/log/log_deprecated.o 00:41:13.756 CC lib/log/log_flags.o 00:41:13.756 LIB libspdk_ut.a 00:41:13.756 LIB libspdk_ut_mock.a 00:41:13.756 LIB libspdk_log.a 00:41:13.756 CC lib/dma/dma.o 00:41:13.756 CXX 
lib/trace_parser/trace.o 00:41:13.756 CC lib/util/bit_array.o 00:41:13.756 CC lib/util/cpuset.o 00:41:13.756 CC lib/ioat/ioat.o 00:41:13.756 CC lib/util/base64.o 00:41:13.756 CC lib/util/crc32.o 00:41:13.756 CC lib/util/crc32c.o 00:41:13.756 CC lib/util/crc16.o 00:41:13.756 CC lib/vfio_user/host/vfio_user_pci.o 00:41:13.756 CC lib/util/crc32_ieee.o 00:41:13.756 CC lib/vfio_user/host/vfio_user.o 00:41:13.756 CC lib/util/crc64.o 00:41:13.756 CC lib/util/dif.o 00:41:13.756 LIB libspdk_dma.a 00:41:13.756 CC lib/util/fd.o 00:41:13.756 CC lib/util/file.o 00:41:13.756 CC lib/util/hexlify.o 00:41:13.756 LIB libspdk_ioat.a 00:41:13.756 CC lib/util/iov.o 00:41:13.756 CC lib/util/math.o 00:41:13.756 CC lib/util/pipe.o 00:41:13.756 CC lib/util/strerror_tls.o 00:41:13.756 LIB libspdk_vfio_user.a 00:41:13.756 CC lib/util/string.o 00:41:13.756 CC lib/util/uuid.o 00:41:13.756 CC lib/util/fd_group.o 00:41:13.756 CC lib/util/xor.o 00:41:13.756 CC lib/util/zipf.o 00:41:13.756 LIB libspdk_util.a 00:41:13.756 LIB libspdk_trace_parser.a 00:41:13.756 CC lib/rdma_utils/rdma_utils.o 00:41:13.756 CC lib/conf/conf.o 00:41:13.756 CC lib/rdma_provider/common.o 00:41:13.756 CC lib/env_dpdk/env.o 00:41:13.756 CC lib/rdma_provider/rdma_provider_verbs.o 00:41:13.756 CC lib/env_dpdk/memory.o 00:41:13.756 CC lib/json/json_parse.o 00:41:13.756 CC lib/idxd/idxd.o 00:41:13.756 CC lib/env_dpdk/pci.o 00:41:13.756 CC lib/vmd/vmd.o 00:41:13.756 CC lib/env_dpdk/init.o 00:41:13.756 LIB libspdk_conf.a 00:41:13.756 CC lib/json/json_util.o 00:41:13.756 CC lib/json/json_write.o 00:41:13.756 LIB libspdk_rdma_provider.a 00:41:13.756 LIB libspdk_rdma_utils.a 00:41:13.756 CC lib/env_dpdk/threads.o 00:41:13.756 CC lib/env_dpdk/pci_ioat.o 00:41:13.756 CC lib/env_dpdk/pci_virtio.o 00:41:13.756 CC lib/vmd/led.o 00:41:13.756 CC lib/idxd/idxd_user.o 00:41:13.756 CC lib/env_dpdk/pci_vmd.o 00:41:13.756 CC lib/env_dpdk/pci_idxd.o 00:41:13.756 CC lib/env_dpdk/pci_event.o 00:41:13.756 LIB libspdk_json.a 00:41:13.756 CC lib/env_dpdk/sigbus_handler.o 00:41:13.756 CC lib/env_dpdk/pci_dpdk.o 00:41:13.756 CC lib/env_dpdk/pci_dpdk_2207.o 00:41:13.756 LIB libspdk_vmd.a 00:41:13.756 LIB libspdk_idxd.a 00:41:13.756 CC lib/env_dpdk/pci_dpdk_2211.o 00:41:13.756 CC lib/jsonrpc/jsonrpc_server.o 00:41:13.756 CC lib/jsonrpc/jsonrpc_client.o 00:41:13.756 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:41:13.756 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:41:13.756 LIB libspdk_jsonrpc.a 00:41:13.756 LIB libspdk_env_dpdk.a 00:41:13.756 CC lib/rpc/rpc.o 00:41:13.756 LIB libspdk_rpc.a 00:41:13.756 CC lib/notify/notify.o 00:41:13.756 CC lib/notify/notify_rpc.o 00:41:13.756 CC lib/trace/trace_rpc.o 00:41:13.756 CC lib/trace/trace.o 00:41:13.756 CC lib/trace/trace_flags.o 00:41:13.756 CC lib/keyring/keyring.o 00:41:13.756 CC lib/keyring/keyring_rpc.o 00:41:13.756 LIB libspdk_notify.a 00:41:13.756 LIB libspdk_keyring.a 00:41:13.756 LIB libspdk_trace.a 00:41:13.756 CC lib/thread/thread.o 00:41:13.756 CC lib/thread/iobuf.o 00:41:13.756 CC lib/sock/sock.o 00:41:13.756 CC lib/sock/sock_rpc.o 00:41:13.756 LIB libspdk_sock.a 00:41:13.756 LIB libspdk_thread.a 00:41:13.756 CC lib/nvme/nvme_ctrlr_cmd.o 00:41:13.756 CC lib/nvme/nvme_ctrlr.o 00:41:13.756 CC lib/nvme/nvme_fabric.o 00:41:13.756 CC lib/nvme/nvme_ns.o 00:41:13.756 CC lib/nvme/nvme_ns_cmd.o 00:41:13.756 CC lib/nvme/nvme_pcie_common.o 00:41:13.756 CC lib/init/json_config.o 00:41:13.756 CC lib/blob/blobstore.o 00:41:13.756 CC lib/accel/accel.o 00:41:13.756 CC lib/virtio/virtio.o 00:41:13.756 CC lib/init/subsystem.o 00:41:13.756 CC 
lib/virtio/virtio_vhost_user.o 00:41:13.756 CC lib/init/subsystem_rpc.o 00:41:13.756 CC lib/accel/accel_rpc.o 00:41:13.756 CC lib/init/rpc.o 00:41:13.756 CC lib/virtio/virtio_vfio_user.o 00:41:13.756 CC lib/nvme/nvme_pcie.o 00:41:13.756 CC lib/blob/request.o 00:41:13.756 CC lib/nvme/nvme_qpair.o 00:41:13.756 CC lib/accel/accel_sw.o 00:41:13.756 CC lib/virtio/virtio_pci.o 00:41:13.756 CC lib/blob/zeroes.o 00:41:13.756 LIB libspdk_init.a 00:41:13.756 CC lib/blob/blob_bs_dev.o 00:41:13.756 CC lib/nvme/nvme.o 00:41:13.756 LIB libspdk_accel.a 00:41:13.756 CC lib/nvme/nvme_transport.o 00:41:13.756 CC lib/nvme/nvme_quirks.o 00:41:13.756 LIB libspdk_virtio.a 00:41:13.756 CC lib/nvme/nvme_discovery.o 00:41:13.756 LIB libspdk_blob.a 00:41:13.756 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:41:13.756 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:41:13.756 CC lib/event/app.o 00:41:13.756 CC lib/event/reactor.o 00:41:13.756 CC lib/event/log_rpc.o 00:41:13.756 CC lib/nvme/nvme_tcp.o 00:41:13.756 CC lib/event/app_rpc.o 00:41:13.756 CC lib/event/scheduler_static.o 00:41:13.756 CC lib/nvme/nvme_opal.o 00:41:13.756 CC lib/nvme/nvme_io_msg.o 00:41:13.756 CC lib/bdev/bdev.o 00:41:13.756 CC lib/bdev/bdev_rpc.o 00:41:13.756 CC lib/nvme/nvme_poll_group.o 00:41:13.756 CC lib/bdev/bdev_zone.o 00:41:13.756 CC lib/blobfs/blobfs.o 00:41:13.756 LIB libspdk_event.a 00:41:13.756 CC lib/nvme/nvme_zns.o 00:41:13.756 CC lib/lvol/lvol.o 00:41:13.756 CC lib/nvme/nvme_stubs.o 00:41:13.756 CC lib/nvme/nvme_auth.o 00:41:13.756 CC lib/bdev/part.o 00:41:13.756 CC lib/blobfs/tree.o 00:41:13.756 CC lib/nvme/nvme_cuse.o 00:41:13.756 LIB libspdk_lvol.a 00:41:13.756 LIB libspdk_blobfs.a 00:41:13.756 CC lib/nvme/nvme_rdma.o 00:41:13.756 CC lib/bdev/scsi_nvme.o 00:41:13.756 LIB libspdk_bdev.a 00:41:13.756 CC lib/nbd/nbd.o 00:41:13.756 CC lib/nbd/nbd_rpc.o 00:41:13.756 CC lib/scsi/dev.o 00:41:13.756 CC lib/scsi/port.o 00:41:13.756 CC lib/scsi/lun.o 00:41:13.756 CC lib/scsi/scsi_bdev.o 00:41:13.756 CC lib/scsi/scsi.o 00:41:13.756 CC lib/ftl/ftl_core.o 00:41:13.756 CC lib/ftl/ftl_init.o 00:41:13.756 CC lib/ftl/ftl_layout.o 00:41:13.756 CC lib/ftl/ftl_debug.o 00:41:13.756 CC lib/ftl/ftl_io.o 00:41:13.756 CC lib/ftl/ftl_sb.o 00:41:13.756 CC lib/ftl/ftl_l2p.o 00:41:13.756 CC lib/ftl/ftl_l2p_flat.o 00:41:13.756 LIB libspdk_nbd.a 00:41:13.756 LIB libspdk_nvme.a 00:41:13.756 CC lib/ftl/ftl_nv_cache.o 00:41:13.756 CC lib/ftl/ftl_band.o 00:41:13.756 CC lib/scsi/scsi_pr.o 00:41:13.756 CC lib/scsi/scsi_rpc.o 00:41:13.756 CC lib/scsi/task.o 00:41:13.756 CC lib/ftl/ftl_band_ops.o 00:41:13.756 CC lib/ftl/ftl_writer.o 00:41:13.756 CC lib/ftl/ftl_rq.o 00:41:13.756 CC lib/ftl/ftl_reloc.o 00:41:13.756 CC lib/ftl/ftl_l2p_cache.o 00:41:13.756 CC lib/ftl/ftl_p2l.o 00:41:13.756 LIB libspdk_scsi.a 00:41:13.756 CC lib/ftl/mngt/ftl_mngt.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_startup.o 00:41:13.756 CC lib/nvmf/ctrlr.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_md.o 00:41:13.756 CC lib/nvmf/ctrlr_discovery.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_misc.o 00:41:13.756 CC lib/nvmf/ctrlr_bdev.o 00:41:13.756 CC lib/iscsi/conn.o 00:41:13.756 CC lib/iscsi/init_grp.o 00:41:13.756 CC lib/vhost/vhost.o 00:41:13.756 CC lib/vhost/vhost_rpc.o 00:41:13.756 CC lib/vhost/vhost_scsi.o 00:41:13.756 CC lib/vhost/vhost_blk.o 00:41:13.756 CC lib/vhost/rte_vhost_user.o 00:41:13.756 CC lib/iscsi/iscsi.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:41:13.756 CC lib/nvmf/subsystem.o 00:41:13.756 CC lib/nvmf/nvmf.o 
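The quiet "CC lib/..." and "LIB libspdk_*.a" entries in this stretch are SPDK's own Makefile output, which takes over once the DPDK subproject under dpdk/build-tmp has finished building via the ninja command named earlier in the log. A minimal sketch of reproducing this phase locally, assuming an already-cloned SPDK tree with its DPDK submodule checked out; the configure flags and parallelism used by this particular job are not shown here and are left as assumptions:

    cd /home/vagrant/spdk_repo/spdk    # path as it appears in this run
    ./configure                        # flag selection is job-specific; none are assumed here
    make -j"$(nproc)"                  # emits the quiet "CC lib/..." / "LIB libspdk_*.a" lines seen in this log
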
00:41:13.756 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:41:13.756 CC lib/iscsi/md5.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_band.o 00:41:13.756 CC lib/nvmf/nvmf_rpc.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:41:13.756 CC lib/iscsi/param.o 00:41:13.756 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:41:13.756 CC lib/ftl/utils/ftl_conf.o 00:41:13.756 CC lib/iscsi/portal_grp.o 00:41:13.756 CC lib/iscsi/tgt_node.o 00:41:13.756 CC lib/iscsi/iscsi_subsystem.o 00:41:13.756 CC lib/iscsi/iscsi_rpc.o 00:41:13.756 LIB libspdk_vhost.a 00:41:13.756 CC lib/iscsi/task.o 00:41:13.756 CC lib/nvmf/transport.o 00:41:13.756 CC lib/nvmf/tcp.o 00:41:13.756 CC lib/ftl/utils/ftl_md.o 00:41:13.756 CC lib/nvmf/stubs.o 00:41:13.756 CC lib/nvmf/mdns_server.o 00:41:13.756 CC lib/ftl/utils/ftl_mempool.o 00:41:13.756 CC lib/nvmf/rdma.o 00:41:13.756 CC lib/nvmf/auth.o 00:41:13.756 CC lib/ftl/utils/ftl_bitmap.o 00:41:13.756 CC lib/ftl/utils/ftl_property.o 00:41:13.756 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:41:13.756 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:41:13.756 LIB libspdk_iscsi.a 00:41:13.756 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:41:13.756 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:41:13.756 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:41:13.756 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:41:13.756 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:41:13.756 CC lib/ftl/upgrade/ftl_sb_v3.o 00:41:13.756 CC lib/ftl/upgrade/ftl_sb_v5.o 00:41:13.757 CC lib/ftl/nvc/ftl_nvc_dev.o 00:41:13.757 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:41:13.757 CC lib/ftl/base/ftl_base_dev.o 00:41:13.757 CC lib/ftl/base/ftl_base_bdev.o 00:41:13.757 LIB libspdk_nvmf.a 00:41:13.757 LIB libspdk_ftl.a 00:41:14.015 CC module/env_dpdk/env_dpdk_rpc.o 00:41:14.015 CC module/scheduler/gscheduler/gscheduler.o 00:41:14.015 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:41:14.015 CC module/keyring/linux/keyring.o 00:41:14.015 CC module/sock/posix/posix.o 00:41:14.015 CC module/keyring/file/keyring.o 00:41:14.015 CC module/blob/bdev/blob_bdev.o 00:41:14.015 CC module/scheduler/dynamic/scheduler_dynamic.o 00:41:14.015 CC module/accel/ioat/accel_ioat.o 00:41:14.015 CC module/accel/error/accel_error.o 00:41:14.015 LIB libspdk_env_dpdk_rpc.a 00:41:14.015 CC module/accel/ioat/accel_ioat_rpc.o 00:41:14.274 LIB libspdk_scheduler_gscheduler.a 00:41:14.274 LIB libspdk_scheduler_dpdk_governor.a 00:41:14.274 CC module/keyring/file/keyring_rpc.o 00:41:14.274 CC module/keyring/linux/keyring_rpc.o 00:41:14.274 CC module/accel/error/accel_error_rpc.o 00:41:14.274 LIB libspdk_scheduler_dynamic.a 00:41:14.274 LIB libspdk_blob_bdev.a 00:41:14.274 LIB libspdk_accel_ioat.a 00:41:14.274 LIB libspdk_keyring_linux.a 00:41:14.274 LIB libspdk_keyring_file.a 00:41:14.274 LIB libspdk_accel_error.a 00:41:14.274 CC module/accel/dsa/accel_dsa.o 00:41:14.274 CC module/accel/iaa/accel_iaa.o 00:41:14.534 LIB libspdk_sock_posix.a 00:41:14.534 CC module/accel/dsa/accel_dsa_rpc.o 00:41:14.534 CC module/blobfs/bdev/blobfs_bdev.o 00:41:14.534 CC module/bdev/lvol/vbdev_lvol.o 00:41:14.534 CC module/bdev/null/bdev_null.o 00:41:14.534 CC module/bdev/gpt/gpt.o 00:41:14.534 CC module/bdev/error/vbdev_error.o 00:41:14.534 CC module/accel/iaa/accel_iaa_rpc.o 00:41:14.534 CC module/bdev/malloc/bdev_malloc.o 00:41:14.534 CC module/bdev/malloc/bdev_malloc_rpc.o 00:41:14.534 CC module/bdev/delay/vbdev_delay.o 00:41:14.534 LIB libspdk_accel_dsa.a 00:41:14.534 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:41:14.534 LIB 
libspdk_accel_iaa.a 00:41:14.534 CC module/bdev/gpt/vbdev_gpt.o 00:41:14.534 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:41:14.534 CC module/bdev/error/vbdev_error_rpc.o 00:41:14.534 CC module/bdev/delay/vbdev_delay_rpc.o 00:41:14.793 CC module/bdev/null/bdev_null_rpc.o 00:41:14.793 LIB libspdk_bdev_malloc.a 00:41:14.793 LIB libspdk_bdev_error.a 00:41:14.793 LIB libspdk_blobfs_bdev.a 00:41:14.793 LIB libspdk_bdev_delay.a 00:41:14.793 LIB libspdk_bdev_gpt.a 00:41:14.793 LIB libspdk_bdev_lvol.a 00:41:14.793 LIB libspdk_bdev_null.a 00:41:14.793 CC module/bdev/nvme/bdev_nvme.o 00:41:14.793 CC module/bdev/nvme/bdev_nvme_rpc.o 00:41:14.793 CC module/bdev/raid/bdev_raid.o 00:41:14.793 CC module/bdev/passthru/vbdev_passthru.o 00:41:14.793 CC module/bdev/split/vbdev_split.o 00:41:15.052 CC module/bdev/zone_block/vbdev_zone_block.o 00:41:15.052 CC module/bdev/aio/bdev_aio.o 00:41:15.052 CC module/bdev/ftl/bdev_ftl.o 00:41:15.052 CC module/bdev/iscsi/bdev_iscsi.o 00:41:15.052 CC module/bdev/virtio/bdev_virtio_scsi.o 00:41:15.052 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:41:15.052 CC module/bdev/split/vbdev_split_rpc.o 00:41:15.052 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:41:15.052 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:41:15.052 CC module/bdev/ftl/bdev_ftl_rpc.o 00:41:15.312 CC module/bdev/aio/bdev_aio_rpc.o 00:41:15.312 CC module/bdev/nvme/nvme_rpc.o 00:41:15.312 CC module/bdev/raid/bdev_raid_rpc.o 00:41:15.312 LIB libspdk_bdev_passthru.a 00:41:15.312 LIB libspdk_bdev_split.a 00:41:15.312 CC module/bdev/virtio/bdev_virtio_blk.o 00:41:15.312 CC module/bdev/virtio/bdev_virtio_rpc.o 00:41:15.312 CC module/bdev/raid/bdev_raid_sb.o 00:41:15.312 LIB libspdk_bdev_zone_block.a 00:41:15.312 LIB libspdk_bdev_iscsi.a 00:41:15.312 LIB libspdk_bdev_aio.a 00:41:15.312 CC module/bdev/nvme/bdev_mdns_client.o 00:41:15.312 CC module/bdev/nvme/vbdev_opal.o 00:41:15.312 LIB libspdk_bdev_ftl.a 00:41:15.312 CC module/bdev/raid/raid0.o 00:41:15.312 CC module/bdev/raid/raid1.o 00:41:15.312 CC module/bdev/raid/concat.o 00:41:15.312 CC module/bdev/nvme/vbdev_opal_rpc.o 00:41:15.312 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:41:15.570 LIB libspdk_bdev_virtio.a 00:41:15.570 LIB libspdk_bdev_raid.a 00:41:15.570 LIB libspdk_bdev_nvme.a 00:41:16.138 CC module/event/subsystems/iobuf/iobuf.o 00:41:16.138 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:41:16.138 CC module/event/subsystems/vmd/vmd.o 00:41:16.138 CC module/event/subsystems/vmd/vmd_rpc.o 00:41:16.138 CC module/event/subsystems/scheduler/scheduler.o 00:41:16.138 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:41:16.138 CC module/event/subsystems/sock/sock.o 00:41:16.138 CC module/event/subsystems/keyring/keyring.o 00:41:16.138 LIB libspdk_event_vmd.a 00:41:16.138 LIB libspdk_event_vhost_blk.a 00:41:16.138 LIB libspdk_event_keyring.a 00:41:16.138 LIB libspdk_event_scheduler.a 00:41:16.138 LIB libspdk_event_iobuf.a 00:41:16.138 LIB libspdk_event_sock.a 00:41:16.397 CC module/event/subsystems/accel/accel.o 00:41:16.656 LIB libspdk_event_accel.a 00:41:16.915 CC module/event/subsystems/bdev/bdev.o 00:41:16.915 LIB libspdk_event_bdev.a 00:41:17.174 CC module/event/subsystems/scsi/scsi.o 00:41:17.174 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:41:17.174 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:41:17.174 CC module/event/subsystems/nbd/nbd.o 00:41:17.434 LIB libspdk_event_nbd.a 00:41:17.435 LIB libspdk_event_scsi.a 00:41:17.435 LIB libspdk_event_nvmf.a 00:41:17.694 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:41:17.694 CC 
module/event/subsystems/iscsi/iscsi.o 00:41:17.953 LIB libspdk_event_vhost_scsi.a 00:41:17.953 LIB libspdk_event_iscsi.a 00:41:18.211 CC app/trace_record/trace_record.o 00:41:18.211 CC app/spdk_lspci/spdk_lspci.o 00:41:18.211 CC app/spdk_nvme_perf/perf.o 00:41:18.211 CXX app/trace/trace.o 00:41:18.211 CC app/iscsi_tgt/iscsi_tgt.o 00:41:18.211 CC app/nvmf_tgt/nvmf_main.o 00:41:18.211 CC app/spdk_tgt/spdk_tgt.o 00:41:18.211 CC examples/util/zipf/zipf.o 00:41:18.211 CC examples/ioat/perf/perf.o 00:41:18.211 CC test/thread/poller_perf/poller_perf.o 00:41:18.212 LINK spdk_lspci 00:41:18.470 LINK spdk_trace_record 00:41:18.470 LINK nvmf_tgt 00:41:18.470 LINK zipf 00:41:18.470 LINK ioat_perf 00:41:18.470 LINK iscsi_tgt 00:41:18.470 LINK poller_perf 00:41:18.470 LINK spdk_tgt 00:41:18.470 LINK spdk_trace 00:41:18.470 LINK spdk_nvme_perf 00:41:26.650 CC examples/ioat/verify/verify.o 00:41:27.617 LINK verify 00:41:42.485 CC test/dma/test_dma/test_dma.o 00:41:42.485 CC test/app/bdev_svc/bdev_svc.o 00:41:42.485 LINK test_dma 00:41:42.485 LINK bdev_svc 00:41:46.670 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:41:48.044 LINK nvme_fuzz 00:41:51.325 CC test/thread/lock/spdk_lock.o 00:41:53.225 LINK spdk_lock 00:41:53.790 CC test/app/histogram_perf/histogram_perf.o 00:41:54.355 LINK histogram_perf 00:41:55.727 CC examples/interrupt_tgt/interrupt_tgt.o 00:41:56.662 CC test/app/jsoncat/jsoncat.o 00:41:56.662 LINK interrupt_tgt 00:41:57.597 LINK jsoncat 00:42:36.506 CC test/app/stub/stub.o 00:42:36.506 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:42:36.506 LINK stub 00:42:37.072 LINK iscsi_fuzz 00:42:41.258 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:42:42.193 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:42:43.567 TEST_HEADER include/spdk/config.h 00:42:43.567 CXX test/cpp_headers/accel.o 00:42:43.824 LINK vhost_fuzz 00:42:44.754 CXX test/cpp_headers/accel_module.o 00:42:46.124 CXX test/cpp_headers/assert.o 00:42:47.497 CXX test/cpp_headers/barrier.o 00:42:48.881 CXX test/cpp_headers/base64.o 00:42:50.808 CXX test/cpp_headers/bdev.o 00:42:52.181 CXX test/cpp_headers/bdev_module.o 00:42:54.760 CXX test/cpp_headers/bdev_zone.o 00:42:56.192 CXX test/cpp_headers/bit_array.o 00:42:58.090 CXX test/cpp_headers/bit_pool.o 00:42:59.467 CXX test/cpp_headers/blob.o 00:43:00.843 CXX test/cpp_headers/blob_bdev.o 00:43:02.215 CXX test/cpp_headers/blobfs.o 00:43:03.587 CXX test/cpp_headers/blobfs_bdev.o 00:43:04.962 CXX test/cpp_headers/conf.o 00:43:06.338 CXX test/cpp_headers/config.o 00:43:06.596 CXX test/cpp_headers/cpuset.o 00:43:07.972 CXX test/cpp_headers/crc16.o 00:43:09.347 CXX test/cpp_headers/crc32.o 00:43:10.282 CC app/spdk_nvme_identify/identify.o 00:43:10.541 CXX test/cpp_headers/crc64.o 00:43:11.939 CXX test/cpp_headers/dif.o 00:43:12.874 LINK spdk_nvme_identify 00:43:12.874 CXX test/cpp_headers/dma.o 00:43:14.248 CXX test/cpp_headers/endian.o 00:43:15.186 CXX test/cpp_headers/env.o 00:43:16.120 CXX test/cpp_headers/env_dpdk.o 00:43:16.120 CC app/spdk_nvme_discover/discovery_aer.o 00:43:16.685 LINK spdk_nvme_discover 00:43:16.685 CXX test/cpp_headers/event.o 00:43:17.253 CXX test/cpp_headers/fd.o 00:43:17.818 CXX test/cpp_headers/fd_group.o 00:43:18.385 CXX test/cpp_headers/file.o 00:43:18.952 CXX test/cpp_headers/ftl.o 00:43:19.212 CXX test/cpp_headers/gpt_spec.o 00:43:19.212 CC examples/thread/thread/thread_ex.o 00:43:19.779 CXX test/cpp_headers/hexlify.o 00:43:19.779 LINK thread 00:43:20.046 CXX test/cpp_headers/histogram_data.o 00:43:20.324 CC app/spdk_top/spdk_top.o 00:43:20.583 CC app/vhost/vhost.o 
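The long run of "CXX test/cpp_headers/<name>.o" entries is a C++ header-compilation check: each object appears to come from a tiny translation unit that does nothing except include one public SPDK header, so that a header unusable from C++ fails the build. An illustrative sketch only, with the file name, include path, and flags being assumptions rather than the job's actual commands:

    echo '#include <spdk/nvme.h>' > nvme_hdr_check.cpp    # hypothetical check for a single header
    g++ -I /home/vagrant/spdk_repo/spdk/include -c nvme_hdr_check.cpp -o nvme_hdr_check.o
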
00:43:20.583 CXX test/cpp_headers/idxd.o 00:43:20.842 CC examples/sock/hello_world/hello_sock.o 00:43:21.100 LINK vhost 00:43:21.358 CXX test/cpp_headers/idxd_spec.o 00:43:21.358 LINK hello_sock 00:43:21.358 LINK spdk_top 00:43:21.926 CC test/env/mem_callbacks/mem_callbacks.o 00:43:21.926 CXX test/cpp_headers/init.o 00:43:22.185 CC test/event/event_perf/event_perf.o 00:43:22.443 LINK event_perf 00:43:22.701 CXX test/cpp_headers/ioat.o 00:43:22.701 CC test/nvme/aer/aer.o 00:43:23.269 CXX test/cpp_headers/ioat_spec.o 00:43:23.269 LINK mem_callbacks 00:43:23.529 LINK aer 00:43:23.788 CXX test/cpp_headers/iscsi_spec.o 00:43:24.721 CXX test/cpp_headers/json.o 00:43:25.654 CXX test/cpp_headers/jsonrpc.o 00:43:26.587 CXX test/cpp_headers/keyring.o 00:43:27.960 CXX test/cpp_headers/keyring_module.o 00:43:28.894 CXX test/cpp_headers/likely.o 00:43:30.793 CXX test/cpp_headers/log.o 00:43:31.726 CXX test/cpp_headers/lvol.o 00:43:33.098 CXX test/cpp_headers/memory.o 00:43:34.995 CXX test/cpp_headers/mmio.o 00:43:36.369 CXX test/cpp_headers/nbd.o 00:43:36.628 CXX test/cpp_headers/notify.o 00:43:38.003 CXX test/cpp_headers/nvme.o 00:43:39.903 CXX test/cpp_headers/nvme_intel.o 00:43:40.836 CXX test/cpp_headers/nvme_ocssd.o 00:43:42.739 CXX test/cpp_headers/nvme_ocssd_spec.o 00:43:44.114 CXX test/cpp_headers/nvme_spec.o 00:43:45.048 CXX test/cpp_headers/nvme_zns.o 00:43:46.424 CXX test/cpp_headers/nvmf.o 00:43:46.992 CXX test/cpp_headers/nvmf_cmd.o 00:43:47.942 CXX test/cpp_headers/nvmf_fc_spec.o 00:43:47.942 CC app/spdk_dd/spdk_dd.o 00:43:48.516 CXX test/cpp_headers/nvmf_spec.o 00:43:49.083 LINK spdk_dd 00:43:49.342 CXX test/cpp_headers/nvmf_transport.o 00:43:49.909 CC test/env/vtophys/vtophys.o 00:43:49.909 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:43:50.169 CXX test/cpp_headers/opal.o 00:43:50.427 LINK vtophys 00:43:50.427 CC test/env/memory/memory_ut.o 00:43:50.685 LINK env_dpdk_post_init 00:43:50.942 CXX test/cpp_headers/opal_spec.o 00:43:51.876 CXX test/cpp_headers/pci_ids.o 00:43:52.134 CXX test/cpp_headers/pipe.o 00:43:53.068 CXX test/cpp_headers/queue.o 00:43:53.068 LINK memory_ut 00:43:53.068 CXX test/cpp_headers/reduce.o 00:43:54.002 CXX test/cpp_headers/rpc.o 00:43:54.002 CC test/env/pci/pci_ut.o 00:43:54.002 CC test/event/reactor/reactor.o 00:43:54.594 CXX test/cpp_headers/scheduler.o 00:43:54.595 LINK reactor 00:43:55.160 LINK pci_ut 00:43:55.160 CXX test/cpp_headers/scsi.o 00:43:56.535 CXX test/cpp_headers/scsi_spec.o 00:43:57.101 CXX test/cpp_headers/sock.o 00:43:57.678 CC test/rpc_client/rpc_client_test.o 00:43:57.937 CXX test/cpp_headers/stdinc.o 00:43:58.503 LINK rpc_client_test 00:43:59.069 CXX test/cpp_headers/string.o 00:44:00.443 CXX test/cpp_headers/thread.o 00:44:00.443 CC test/nvme/reset/reset.o 00:44:01.385 CXX test/cpp_headers/trace.o 00:44:01.643 LINK reset 00:44:03.019 CXX test/cpp_headers/trace_parser.o 00:44:03.954 CXX test/cpp_headers/tree.o 00:44:04.213 CXX test/cpp_headers/ublk.o 00:44:05.586 CXX test/cpp_headers/util.o 00:44:06.960 CXX test/cpp_headers/uuid.o 00:44:08.334 CXX test/cpp_headers/version.o 00:44:08.592 CXX test/cpp_headers/vfio_user_pci.o 00:44:09.967 CXX test/cpp_headers/vfio_user_spec.o 00:44:11.868 CXX test/cpp_headers/vhost.o 00:44:12.804 CXX test/cpp_headers/vmd.o 00:44:14.177 CXX test/cpp_headers/xor.o 00:44:15.636 CXX test/cpp_headers/zipf.o 00:44:17.539 CC test/nvme/sgl/sgl.o 00:44:18.912 LINK sgl 00:44:20.287 CC app/fio/nvme/fio_plugin.o 00:44:22.288 LINK spdk_nvme 00:44:23.663 CC app/fio/bdev/fio_plugin.o 00:44:25.563 LINK spdk_bdev 
00:44:27.462 CC test/accel/dif/dif.o 00:44:28.028 CC test/event/reactor_perf/reactor_perf.o 00:44:28.962 LINK reactor_perf 00:44:28.962 LINK dif 00:44:32.244 CC test/nvme/e2edp/nvme_dp.o 00:44:33.178 LINK nvme_dp 00:44:48.057 CC examples/vmd/lsvmd/lsvmd.o 00:44:48.991 LINK lsvmd 00:44:48.991 CC examples/vmd/led/led.o 00:44:49.925 LINK led 00:45:02.125 CC test/blobfs/mkfs/mkfs.o 00:45:02.125 LINK mkfs 00:45:06.313 CC test/lvol/esnap/esnap.o 00:45:07.708 CC test/event/app_repeat/app_repeat.o 00:45:09.084 LINK app_repeat 00:45:15.647 LINK esnap 00:45:20.949 CC test/nvme/overhead/overhead.o 00:45:21.884 LINK overhead 00:45:27.152 CC test/event/scheduler/scheduler.o 00:45:27.410 LINK scheduler 00:45:35.519 CC test/nvme/err_injection/err_injection.o 00:45:35.779 LINK err_injection 00:45:36.038 CC examples/idxd/perf/perf.o 00:45:37.414 LINK idxd_perf 00:45:37.979 CC test/nvme/startup/startup.o 00:45:39.356 LINK startup 00:46:11.419 CC test/nvme/reserve/reserve.o 00:46:11.419 LINK reserve 00:46:11.419 CC test/nvme/simple_copy/simple_copy.o 00:46:11.677 LINK simple_copy 00:46:14.210 CC examples/accel/perf/accel_perf.o 00:46:14.210 CC test/nvme/connect_stress/connect_stress.o 00:46:15.590 LINK connect_stress 00:46:15.590 LINK accel_perf 00:46:25.573 CC test/nvme/boot_partition/boot_partition.o 00:46:26.953 LINK boot_partition 00:46:30.240 CC examples/blob/hello_world/hello_blob.o 00:46:31.176 LINK hello_blob 00:46:49.251 CC test/nvme/compliance/nvme_compliance.o 00:46:49.817 LINK nvme_compliance 00:46:54.002 CC test/nvme/fused_ordering/fused_ordering.o 00:46:54.567 LINK fused_ordering 00:46:55.944 CC test/bdev/bdevio/bdevio.o 00:46:56.510 CC test/nvme/doorbell_aers/doorbell_aers.o 00:46:56.768 LINK bdevio 00:46:57.335 LINK doorbell_aers 00:46:57.335 CC test/nvme/fdp/fdp.o 00:46:58.271 LINK fdp 00:47:08.278 CC examples/blob/cli/blobcli.o 00:47:08.278 CC examples/nvme/hello_world/hello_world.o 00:47:09.209 LINK blobcli 00:47:09.787 LINK hello_world 00:47:16.358 CC examples/nvme/reconnect/reconnect.o 00:47:17.733 LINK reconnect 00:47:27.740 CC examples/nvme/nvme_manage/nvme_manage.o 00:47:30.271 LINK nvme_manage 00:47:33.552 CC test/nvme/cuse/cuse.o 00:47:38.817 LINK cuse 00:47:39.383 CC examples/nvme/arbitration/arbitration.o 00:47:39.640 CC examples/nvme/hotplug/hotplug.o 00:47:41.013 LINK arbitration 00:47:41.013 LINK hotplug 00:47:49.152 CC examples/nvme/cmb_copy/cmb_copy.o 00:47:50.523 LINK cmb_copy 00:47:54.709 CC examples/nvme/abort/abort.o 00:47:56.084 LINK abort 00:48:02.643 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:48:03.579 LINK pmr_persistence 00:48:30.117 CC examples/bdev/hello_world/hello_bdev.o 00:48:30.117 CC examples/bdev/bdevperf/bdevperf.o 00:48:30.117 LINK hello_bdev 00:48:30.117 LINK bdevperf 00:48:56.656 CC examples/nvmf/nvmf/nvmf.o 00:48:56.656 LINK nvmf 00:49:08.854 make: Leaving directory '/home/vagrant/spdk_repo/spdk' 00:49:08.854 00:49:08.854 real 10m11.013s 00:49:08.854 user 78m1.476s 00:49:08.854 sys 4m18.410s 00:49:08.854 14:41:53 build_release -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:49:08.854 14:41:53 build_release -- common/autotest_common.sh@10 -- $ set +x 00:49:08.854 ************************************ 00:49:08.854 END TEST build_release 00:49:08.854 ************************************ 00:49:08.854 14:41:53 -- common/autotest_common.sh@1142 -- $ return 0 00:49:08.854 14:41:53 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:49:08.854 14:41:53 -- pm/common@29 -- $ signal_monitor_resources TERM 00:49:08.854 14:41:53 -- pm/common@40 
-- $ local monitor pid pids signal=TERM 00:49:08.854 14:41:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:08.854 14:41:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:49:08.854 14:41:53 -- pm/common@44 -- $ pid=226919 00:49:08.854 14:41:53 -- pm/common@50 -- $ kill -TERM 226919 00:49:08.854 14:41:53 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:49:08.854 14:41:53 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:49:08.854 14:41:53 -- pm/common@44 -- $ pid=226921 00:49:08.854 14:41:53 -- pm/common@50 -- $ kill -TERM 226921 00:49:08.854 + [[ -n 7395 ]] 00:49:08.854 + sudo kill 7395 00:49:08.854 sudo: /etc/sudoers.d/99-spdk-rlimits:1:23: unknown defaults entry "rlimit_core" 00:49:08.863 [Pipeline] } 00:49:08.882 [Pipeline] // timeout 00:49:08.887 [Pipeline] } 00:49:08.905 [Pipeline] // stage 00:49:08.911 [Pipeline] } 00:49:08.928 [Pipeline] // catchError 00:49:08.936 [Pipeline] stage 00:49:08.938 [Pipeline] { (Stop VM) 00:49:08.952 [Pipeline] sh 00:49:09.231 + vagrant halt 00:49:12.518 ==> default: Halting domain... 00:49:20.679 [Pipeline] sh 00:49:20.954 + vagrant destroy -f 00:49:24.233 ==> default: Removing domain... 00:49:24.243 [Pipeline] sh 00:49:24.523 + mv output /var/jenkins/workspace/rocky9-vg-autotest_2/output 00:49:24.533 [Pipeline] } 00:49:24.550 [Pipeline] // stage 00:49:24.554 [Pipeline] } 00:49:24.568 [Pipeline] // dir 00:49:24.572 [Pipeline] } 00:49:24.581 [Pipeline] // wrap 00:49:24.585 [Pipeline] } 00:49:24.596 [Pipeline] // catchError 00:49:24.604 [Pipeline] stage 00:49:24.607 [Pipeline] { (Epilogue) 00:49:24.620 [Pipeline] sh 00:49:24.899 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:49:43.030 [Pipeline] catchError 00:49:43.033 [Pipeline] { 00:49:43.048 [Pipeline] sh 00:49:43.331 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:49:43.331 Artifacts sizes are good 00:49:43.340 [Pipeline] } 00:49:43.358 [Pipeline] // catchError 00:49:43.371 [Pipeline] archiveArtifacts 00:49:43.378 Archiving artifacts 00:49:43.751 [Pipeline] cleanWs 00:49:43.763 [WS-CLEANUP] Deleting project workspace... 00:49:43.763 [WS-CLEANUP] Deferred wipeout is used... 00:49:43.770 [WS-CLEANUP] done 00:49:43.772 [Pipeline] } 00:49:43.790 [Pipeline] // stage 00:49:43.796 [Pipeline] } 00:49:43.810 [Pipeline] // node 00:49:43.815 [Pipeline] End of Pipeline 00:49:43.844 Finished: SUCCESS
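For reference, the VM stop and cleanup stages at the end of this pipeline reduce to a handful of shell steps. The commands below are the ones visible verbatim in the log above; running them outside Jenkins assumes the same Vagrant working directory and workspace layout as this job:

    vagrant halt                                                    # "(Stop VM)" stage
    vagrant destroy -f                                              # remove the build VM
    mv output /var/jenkins/workspace/rocky9-vg-autotest_2/output    # publish test output into the workspace

The remaining epilogue steps (compress_artifacts.sh, check_artifacts_size.sh, archiving, workspace cleanup) are driven by the jbp scripts named in the log and are specific to this Jenkins setup.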